00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1750 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3011 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.014 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.015 The recommended git tool is: git 00:00:00.016 using credential 00000000-0000-0000-0000-000000000002 00:00:00.019 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.033 Fetching changes from the remote Git repository 00:00:00.035 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.050 Using shallow fetch with depth 1 00:00:00.050 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.050 > git --version # timeout=10 00:00:00.071 > git --version # 'git version 2.39.2' 00:00:00.071 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.072 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.072 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.317 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.330 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.342 Checking out Revision 6201031def5bfb7f90a861bc162998684798607e (FETCH_HEAD) 00:00:02.342 > git config core.sparsecheckout # timeout=10 00:00:02.353 > git read-tree -mu HEAD # timeout=10 00:00:02.370 > git checkout -f 6201031def5bfb7f90a861bc162998684798607e # timeout=5 00:00:02.392 Commit message: "scripts/kid: Add issue 3354" 00:00:02.392 > git rev-list --no-walk 6201031def5bfb7f90a861bc162998684798607e # timeout=10 00:00:02.551 [Pipeline] Start of Pipeline 00:00:02.565 [Pipeline] library 00:00:02.567 Loading library shm_lib@master 00:00:02.567 Library shm_lib@master is cached. Copying from home. 00:00:02.584 [Pipeline] node 00:00:02.601 Running on FCP03 in /var/jenkins/workspace/dsa-phy-autotest 00:00:02.603 [Pipeline] { 00:00:02.618 [Pipeline] catchError 00:00:02.620 [Pipeline] { 00:00:02.638 [Pipeline] wrap 00:00:02.651 [Pipeline] { 00:00:02.661 [Pipeline] stage 00:00:02.664 [Pipeline] { (Prologue) 00:00:02.880 [Pipeline] sh 00:00:03.160 + logger -p user.info -t JENKINS-CI 00:00:03.180 [Pipeline] echo 00:00:03.182 Node: FCP03 00:00:03.192 [Pipeline] sh 00:00:03.488 [Pipeline] setCustomBuildProperty 00:00:03.498 [Pipeline] echo 00:00:03.499 Cleanup processes 00:00:03.503 [Pipeline] sh 00:00:03.784 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:03.784 1217830 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:03.797 [Pipeline] sh 00:00:04.080 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:04.080 ++ grep -v 'sudo pgrep' 00:00:04.080 ++ awk '{print $1}' 00:00:04.080 + sudo kill -9 00:00:04.080 + true 00:00:04.092 [Pipeline] cleanWs 00:00:04.100 [WS-CLEANUP] Deleting project workspace... 00:00:04.100 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.106 [WS-CLEANUP] done 00:00:04.109 [Pipeline] setCustomBuildProperty 00:00:04.120 [Pipeline] sh 00:00:04.400 + sudo git config --global --replace-all safe.directory '*' 00:00:04.466 [Pipeline] nodesByLabel 00:00:04.467 Found a total of 1 nodes with the 'sorcerer' label 00:00:04.476 [Pipeline] httpRequest 00:00:04.481 HttpMethod: GET 00:00:04.481 URL: http://10.211.164.96/packages/jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:04.484 Sending request to url: http://10.211.164.96/packages/jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:04.492 Response Code: HTTP/1.1 200 OK 00:00:04.492 Success: Status code 200 is in the accepted range: 200,404 00:00:04.493 Saving response body to /var/jenkins/workspace/dsa-phy-autotest/jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:05.146 [Pipeline] sh 00:00:05.428 + tar --no-same-owner -xf jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:05.444 [Pipeline] httpRequest 00:00:05.449 HttpMethod: GET 00:00:05.450 URL: http://10.211.164.96/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:05.451 Sending request to url: http://10.211.164.96/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:05.470 Response Code: HTTP/1.1 200 OK 00:00:05.470 Success: Status code 200 is in the accepted range: 200,404 00:00:05.471 Saving response body to /var/jenkins/workspace/dsa-phy-autotest/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:01:23.280 [Pipeline] sh 00:01:23.566 + tar --no-same-owner -xf spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:01:26.123 [Pipeline] sh 00:01:26.404 + git -C spdk log --oneline -n5 00:01:26.404 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset 00:01:26.404 e2cb5a5ee bdev/nvme: Factor out nvme_ns active/inactive check into a helper function 00:01:26.404 4b134b4ab bdev/nvme: Delay callbacks when the next operation is a failover 00:01:26.404 d2ea4ecb1 llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair` 00:01:26.404 3b33f4333 test/nvme/cuse: Fix typo 00:01:26.415 [Pipeline] } 00:01:26.430 [Pipeline] // stage 00:01:26.438 [Pipeline] stage 00:01:26.440 [Pipeline] { (Prepare) 00:01:26.458 [Pipeline] writeFile 00:01:26.476 [Pipeline] sh 00:01:26.759 + logger -p user.info -t JENKINS-CI 00:01:26.770 [Pipeline] sh 00:01:27.051 + logger -p user.info -t JENKINS-CI 00:01:27.062 [Pipeline] sh 00:01:27.343 + cat autorun-spdk.conf 00:01:27.343 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.343 SPDK_TEST_ACCEL_DSA=1 00:01:27.343 SPDK_TEST_ACCEL_IAA=1 00:01:27.343 SPDK_TEST_NVMF=1 00:01:27.343 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.343 SPDK_RUN_ASAN=1 00:01:27.343 SPDK_RUN_UBSAN=1 00:01:27.351 RUN_NIGHTLY=1 00:01:27.355 [Pipeline] readFile 00:01:27.378 [Pipeline] withEnv 00:01:27.380 [Pipeline] { 00:01:27.394 [Pipeline] sh 00:01:27.732 + set -ex 00:01:27.732 + [[ -f /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf ]] 00:01:27.732 + source /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf 00:01:27.732 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.732 ++ SPDK_TEST_ACCEL_DSA=1 00:01:27.732 ++ SPDK_TEST_ACCEL_IAA=1 00:01:27.732 ++ SPDK_TEST_NVMF=1 00:01:27.732 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.732 ++ SPDK_RUN_ASAN=1 00:01:27.732 ++ SPDK_RUN_UBSAN=1 00:01:27.732 ++ RUN_NIGHTLY=1 00:01:27.732 + case $SPDK_TEST_NVMF_NICS in 00:01:27.732 + DRIVERS= 00:01:27.732 + [[ -n '' ]] 00:01:27.732 + exit 0 00:01:27.742 [Pipeline] } 00:01:27.760 [Pipeline] // withEnv 00:01:27.766 [Pipeline] } 00:01:27.782 [Pipeline] // stage 00:01:27.791 
[Pipeline] catchError 00:01:27.793 [Pipeline] { 00:01:27.808 [Pipeline] timeout 00:01:27.808 Timeout set to expire in 50 min 00:01:27.811 [Pipeline] { 00:01:27.826 [Pipeline] stage 00:01:27.828 [Pipeline] { (Tests) 00:01:27.844 [Pipeline] sh 00:01:28.127 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/dsa-phy-autotest 00:01:28.127 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest 00:01:28.127 + DIR_ROOT=/var/jenkins/workspace/dsa-phy-autotest 00:01:28.127 + [[ -n /var/jenkins/workspace/dsa-phy-autotest ]] 00:01:28.127 + DIR_SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:28.127 + DIR_OUTPUT=/var/jenkins/workspace/dsa-phy-autotest/output 00:01:28.127 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/spdk ]] 00:01:28.127 + [[ ! -d /var/jenkins/workspace/dsa-phy-autotest/output ]] 00:01:28.127 + mkdir -p /var/jenkins/workspace/dsa-phy-autotest/output 00:01:28.127 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/output ]] 00:01:28.127 + cd /var/jenkins/workspace/dsa-phy-autotest 00:01:28.127 + source /etc/os-release 00:01:28.127 ++ NAME='Fedora Linux' 00:01:28.127 ++ VERSION='38 (Cloud Edition)' 00:01:28.127 ++ ID=fedora 00:01:28.127 ++ VERSION_ID=38 00:01:28.127 ++ VERSION_CODENAME= 00:01:28.127 ++ PLATFORM_ID=platform:f38 00:01:28.127 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:28.127 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:28.127 ++ LOGO=fedora-logo-icon 00:01:28.127 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:28.127 ++ HOME_URL=https://fedoraproject.org/ 00:01:28.127 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:28.127 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:28.127 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:28.127 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:28.127 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:28.127 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:28.127 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:28.127 ++ SUPPORT_END=2024-05-14 00:01:28.127 ++ VARIANT='Cloud Edition' 00:01:28.127 ++ VARIANT_ID=cloud 00:01:28.127 + uname -a 00:01:28.127 Linux spdk-fcp-03 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:28.127 + sudo /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:01:30.045 Hugepages 00:01:30.045 node hugesize free / total 00:01:30.045 node0 1048576kB 0 / 0 00:01:30.045 node0 2048kB 0 / 0 00:01:30.045 node1 1048576kB 0 / 0 00:01:30.045 node1 2048kB 0 / 0 00:01:30.045 00:01:30.045 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:30.305 NVMe 0000:03:00.0 1344 51c3 0 nvme nvme1 nvme1n1 00:01:30.305 DSA 0000:6a:01.0 8086 0b25 0 idxd - - 00:01:30.305 IAA 0000:6a:02.0 8086 0cfe 0 idxd - - 00:01:30.305 DSA 0000:6f:01.0 8086 0b25 0 idxd - - 00:01:30.305 IAA 0000:6f:02.0 8086 0cfe 0 idxd - - 00:01:30.305 DSA 0000:74:01.0 8086 0b25 0 idxd - - 00:01:30.305 IAA 0000:74:02.0 8086 0cfe 0 idxd - - 00:01:30.306 DSA 0000:79:01.0 8086 0b25 0 idxd - - 00:01:30.306 IAA 0000:79:02.0 8086 0cfe 0 idxd - - 00:01:30.306 NVMe 0000:c9:00.0 144d a80a 1 nvme nvme0 nvme0n1 00:01:30.306 DSA 0000:e7:01.0 8086 0b25 1 idxd - - 00:01:30.306 IAA 0000:e7:02.0 8086 0cfe 1 idxd - - 00:01:30.306 DSA 0000:ec:01.0 8086 0b25 1 idxd - - 00:01:30.306 IAA 0000:ec:02.0 8086 0cfe 1 idxd - - 00:01:30.306 DSA 0000:f1:01.0 8086 0b25 1 idxd - - 00:01:30.306 IAA 0000:f1:02.0 8086 0cfe 1 idxd - - 00:01:30.306 DSA 0000:f6:01.0 8086 0b25 1 idxd - - 00:01:30.306 IAA 0000:f6:02.0 8086 0cfe 1 idxd - - 00:01:30.306 + rm -f 
/tmp/spdk-ld-path 00:01:30.306 + source autorun-spdk.conf 00:01:30.306 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.306 ++ SPDK_TEST_ACCEL_DSA=1 00:01:30.306 ++ SPDK_TEST_ACCEL_IAA=1 00:01:30.306 ++ SPDK_TEST_NVMF=1 00:01:30.306 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.306 ++ SPDK_RUN_ASAN=1 00:01:30.306 ++ SPDK_RUN_UBSAN=1 00:01:30.306 ++ RUN_NIGHTLY=1 00:01:30.306 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:30.306 + [[ -n '' ]] 00:01:30.306 + sudo git config --global --add safe.directory /var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:30.306 + for M in /var/spdk/build-*-manifest.txt 00:01:30.306 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:30.306 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/ 00:01:30.306 + for M in /var/spdk/build-*-manifest.txt 00:01:30.306 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:30.306 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/ 00:01:30.306 ++ uname 00:01:30.306 + [[ Linux == \L\i\n\u\x ]] 00:01:30.306 + sudo dmesg -T 00:01:30.306 + sudo dmesg --clear 00:01:30.567 + dmesg_pid=1218838 00:01:30.567 + [[ Fedora Linux == FreeBSD ]] 00:01:30.567 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:30.567 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:30.567 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:30.567 + [[ -x /usr/src/fio-static/fio ]] 00:01:30.567 + export FIO_BIN=/usr/src/fio-static/fio 00:01:30.567 + FIO_BIN=/usr/src/fio-static/fio 00:01:30.567 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\d\s\a\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:30.567 + sudo dmesg -Tw 00:01:30.567 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:30.567 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:30.567 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:30.567 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:30.567 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:30.567 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:30.567 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:30.567 + spdk/autorun.sh /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf 00:01:30.567 Test configuration: 00:01:30.567 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.567 SPDK_TEST_ACCEL_DSA=1 00:01:30.567 SPDK_TEST_ACCEL_IAA=1 00:01:30.567 SPDK_TEST_NVMF=1 00:01:30.567 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.567 SPDK_RUN_ASAN=1 00:01:30.567 SPDK_RUN_UBSAN=1 00:01:30.567 RUN_NIGHTLY=1 19:56:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:01:30.567 19:56:28 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:30.567 19:56:28 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:30.567 19:56:28 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:30.567 19:56:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.567 19:56:28 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.567 19:56:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.567 19:56:28 -- paths/export.sh@5 -- $ export PATH 00:01:30.567 19:56:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.567 19:56:28 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:01:30.567 19:56:28 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:30.567 19:56:28 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714067788.XXXXXX 00:01:30.567 19:56:28 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714067788.2ZLZxl 00:01:30.567 19:56:28 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:30.567 19:56:28 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:30.567 19:56:28 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/' 00:01:30.567 19:56:28 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:30.567 19:56:28 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:30.567 19:56:28 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:30.567 19:56:28 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:30.567 19:56:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.567 19:56:28 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:30.567 19:56:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:30.567 19:56:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:30.567 19:56:28 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:30.567 19:56:28 -- spdk/autobuild.sh@16 -- $ date -u 00:01:30.567 Thu Apr 25 05:56:28 PM UTC 2024 00:01:30.567 19:56:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:30.567 LTS-24-g36faa8c31 00:01:30.567 19:56:28 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:30.567 19:56:28 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:30.567 19:56:28 -- 
common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:30.567 19:56:28 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:30.567 19:56:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.567 ************************************ 00:01:30.567 START TEST asan 00:01:30.567 ************************************ 00:01:30.567 19:56:28 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:01:30.567 using asan 00:01:30.567 00:01:30.567 real 0m0.000s 00:01:30.568 user 0m0.000s 00:01:30.568 sys 0m0.000s 00:01:30.568 19:56:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:30.568 19:56:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.568 ************************************ 00:01:30.568 END TEST asan 00:01:30.568 ************************************ 00:01:30.568 19:56:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:30.568 19:56:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:30.568 19:56:28 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:30.568 19:56:28 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:30.568 19:56:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.568 ************************************ 00:01:30.568 START TEST ubsan 00:01:30.568 ************************************ 00:01:30.568 19:56:28 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:30.568 using ubsan 00:01:30.568 00:01:30.568 real 0m0.000s 00:01:30.568 user 0m0.000s 00:01:30.568 sys 0m0.000s 00:01:30.568 19:56:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:30.568 19:56:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.568 ************************************ 00:01:30.568 END TEST ubsan 00:01:30.568 ************************************ 00:01:30.568 19:56:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:30.568 19:56:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:30.568 19:56:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:30.568 19:56:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:30.568 19:56:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:30.568 19:56:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:30.568 19:56:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:30.568 19:56:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:30.568 19:56:28 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:30.568 Using default SPDK env in /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:01:30.568 Using default DPDK in /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:01:31.138 Using 'verbs' RDMA provider 00:01:41.710 Configuring ISA-L (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:51.707 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:52.279 Creating mk/config.mk...done. 00:01:52.279 Creating mk/cc.flags.mk...done. 00:01:52.279 Type 'make' to build. 
00:01:52.279 19:56:49 -- spdk/autobuild.sh@69 -- $ run_test make make -j128 00:01:52.279 19:56:49 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:52.279 19:56:49 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:52.279 19:56:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.279 ************************************ 00:01:52.279 START TEST make 00:01:52.279 ************************************ 00:01:52.279 19:56:49 -- common/autotest_common.sh@1104 -- $ make -j128 00:01:52.279 make[1]: Nothing to be done for 'all'. 00:01:57.549 The Meson build system 00:01:57.549 Version: 1.3.1 00:01:57.549 Source dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk 00:01:57.549 Build dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp 00:01:57.549 Build type: native build 00:01:57.549 Program cat found: YES (/usr/bin/cat) 00:01:57.549 Project name: DPDK 00:01:57.549 Project version: 23.11.0 00:01:57.549 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:57.549 C linker for the host machine: cc ld.bfd 2.39-16 00:01:57.549 Host machine cpu family: x86_64 00:01:57.549 Host machine cpu: x86_64 00:01:57.549 Message: ## Building in Developer Mode ## 00:01:57.549 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:57.549 Program check-symbols.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:57.549 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:57.549 Program python3 found: YES (/usr/bin/python3) 00:01:57.549 Program cat found: YES (/usr/bin/cat) 00:01:57.549 Compiler for C supports arguments -march=native: YES 00:01:57.549 Checking for size of "void *" : 8 00:01:57.549 Checking for size of "void *" : 8 (cached) 00:01:57.549 Library m found: YES 00:01:57.549 Library numa found: YES 00:01:57.549 Has header "numaif.h" : YES 00:01:57.549 Library fdt found: NO 00:01:57.549 Library execinfo found: NO 00:01:57.549 Has header "execinfo.h" : YES 00:01:57.549 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:57.549 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:57.549 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:57.549 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:57.549 Run-time dependency openssl found: YES 3.0.9 00:01:57.549 Run-time dependency libpcap found: YES 1.10.4 00:01:57.549 Has header "pcap.h" with dependency libpcap: YES 00:01:57.549 Compiler for C supports arguments -Wcast-qual: YES 00:01:57.549 Compiler for C supports arguments -Wdeprecated: YES 00:01:57.549 Compiler for C supports arguments -Wformat: YES 00:01:57.549 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:57.549 Compiler for C supports arguments -Wformat-security: NO 00:01:57.549 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:57.549 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:57.549 Compiler for C supports arguments -Wnested-externs: YES 00:01:57.549 Compiler for C supports arguments -Wold-style-definition: YES 00:01:57.549 Compiler for C supports arguments -Wpointer-arith: YES 00:01:57.549 Compiler for C supports arguments -Wsign-compare: YES 00:01:57.549 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:57.549 Compiler for C supports arguments -Wundef: YES 00:01:57.549 Compiler for C supports arguments -Wwrite-strings: YES 00:01:57.549 Compiler for C supports arguments 
-Wno-address-of-packed-member: YES 00:01:57.549 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:57.549 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:57.549 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:57.549 Program objdump found: YES (/usr/bin/objdump) 00:01:57.549 Compiler for C supports arguments -mavx512f: YES 00:01:57.549 Checking if "AVX512 checking" compiles: YES 00:01:57.549 Fetching value of define "__SSE4_2__" : 1 00:01:57.549 Fetching value of define "__AES__" : 1 00:01:57.549 Fetching value of define "__AVX__" : 1 00:01:57.549 Fetching value of define "__AVX2__" : 1 00:01:57.549 Fetching value of define "__AVX512BW__" : 1 00:01:57.549 Fetching value of define "__AVX512CD__" : 1 00:01:57.549 Fetching value of define "__AVX512DQ__" : 1 00:01:57.549 Fetching value of define "__AVX512F__" : 1 00:01:57.549 Fetching value of define "__AVX512VL__" : 1 00:01:57.549 Fetching value of define "__PCLMUL__" : 1 00:01:57.549 Fetching value of define "__RDRND__" : 1 00:01:57.549 Fetching value of define "__RDSEED__" : 1 00:01:57.549 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:57.549 Fetching value of define "__znver1__" : (undefined) 00:01:57.549 Fetching value of define "__znver2__" : (undefined) 00:01:57.549 Fetching value of define "__znver3__" : (undefined) 00:01:57.549 Fetching value of define "__znver4__" : (undefined) 00:01:57.549 Library asan found: YES 00:01:57.549 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:57.549 Message: lib/log: Defining dependency "log" 00:01:57.549 Message: lib/kvargs: Defining dependency "kvargs" 00:01:57.549 Message: lib/telemetry: Defining dependency "telemetry" 00:01:57.549 Library rt found: YES 00:01:57.549 Checking for function "getentropy" : NO 00:01:57.549 Message: lib/eal: Defining dependency "eal" 00:01:57.549 Message: lib/ring: Defining dependency "ring" 00:01:57.549 Message: lib/rcu: Defining dependency "rcu" 00:01:57.549 Message: lib/mempool: Defining dependency "mempool" 00:01:57.549 Message: lib/mbuf: Defining dependency "mbuf" 00:01:57.549 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:57.549 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:57.549 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:57.549 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:57.549 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:57.549 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:57.549 Compiler for C supports arguments -mpclmul: YES 00:01:57.549 Compiler for C supports arguments -maes: YES 00:01:57.549 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:57.549 Compiler for C supports arguments -mavx512bw: YES 00:01:57.549 Compiler for C supports arguments -mavx512dq: YES 00:01:57.549 Compiler for C supports arguments -mavx512vl: YES 00:01:57.549 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:57.549 Compiler for C supports arguments -mavx2: YES 00:01:57.549 Compiler for C supports arguments -mavx: YES 00:01:57.549 Message: lib/net: Defining dependency "net" 00:01:57.549 Message: lib/meter: Defining dependency "meter" 00:01:57.549 Message: lib/ethdev: Defining dependency "ethdev" 00:01:57.549 Message: lib/pci: Defining dependency "pci" 00:01:57.549 Message: lib/cmdline: Defining dependency "cmdline" 00:01:57.549 Message: lib/hash: Defining dependency "hash" 00:01:57.549 Message: lib/timer: Defining dependency "timer" 00:01:57.549 Message: lib/compressdev: Defining dependency 
"compressdev" 00:01:57.549 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:57.549 Message: lib/dmadev: Defining dependency "dmadev" 00:01:57.549 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:57.549 Message: lib/power: Defining dependency "power" 00:01:57.549 Message: lib/reorder: Defining dependency "reorder" 00:01:57.549 Message: lib/security: Defining dependency "security" 00:01:57.549 Has header "linux/userfaultfd.h" : YES 00:01:57.549 Has header "linux/vduse.h" : YES 00:01:57.549 Message: lib/vhost: Defining dependency "vhost" 00:01:57.549 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:57.549 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:57.549 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:57.549 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:57.549 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:57.549 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:57.549 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:57.549 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:57.549 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:57.549 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:57.549 Program doxygen found: YES (/usr/bin/doxygen) 00:01:57.549 Configuring doxy-api-html.conf using configuration 00:01:57.549 Configuring doxy-api-man.conf using configuration 00:01:57.549 Program mandb found: YES (/usr/bin/mandb) 00:01:57.549 Program sphinx-build found: NO 00:01:57.549 Configuring rte_build_config.h using configuration 00:01:57.549 Message: 00:01:57.549 ================= 00:01:57.549 Applications Enabled 00:01:57.549 ================= 00:01:57.549 00:01:57.549 apps: 00:01:57.549 00:01:57.549 00:01:57.549 Message: 00:01:57.549 ================= 00:01:57.549 Libraries Enabled 00:01:57.549 ================= 00:01:57.549 00:01:57.549 libs: 00:01:57.549 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:57.549 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:57.549 cryptodev, dmadev, power, reorder, security, vhost, 00:01:57.549 00:01:57.549 Message: 00:01:57.549 =============== 00:01:57.549 Drivers Enabled 00:01:57.549 =============== 00:01:57.549 00:01:57.549 common: 00:01:57.549 00:01:57.549 bus: 00:01:57.549 pci, vdev, 00:01:57.549 mempool: 00:01:57.549 ring, 00:01:57.549 dma: 00:01:57.549 00:01:57.549 net: 00:01:57.549 00:01:57.549 crypto: 00:01:57.549 00:01:57.549 compress: 00:01:57.549 00:01:57.549 vdpa: 00:01:57.549 00:01:57.549 00:01:57.549 Message: 00:01:57.549 ================= 00:01:57.549 Content Skipped 00:01:57.549 ================= 00:01:57.549 00:01:57.549 apps: 00:01:57.549 dumpcap: explicitly disabled via build config 00:01:57.549 graph: explicitly disabled via build config 00:01:57.549 pdump: explicitly disabled via build config 00:01:57.549 proc-info: explicitly disabled via build config 00:01:57.550 test-acl: explicitly disabled via build config 00:01:57.550 test-bbdev: explicitly disabled via build config 00:01:57.550 test-cmdline: explicitly disabled via build config 00:01:57.550 test-compress-perf: explicitly disabled via build config 00:01:57.550 test-crypto-perf: explicitly disabled via build config 00:01:57.550 test-dma-perf: explicitly disabled via build config 00:01:57.550 test-eventdev: explicitly disabled via build config 00:01:57.550 test-fib: 
explicitly disabled via build config 00:01:57.550 test-flow-perf: explicitly disabled via build config 00:01:57.550 test-gpudev: explicitly disabled via build config 00:01:57.550 test-mldev: explicitly disabled via build config 00:01:57.550 test-pipeline: explicitly disabled via build config 00:01:57.550 test-pmd: explicitly disabled via build config 00:01:57.550 test-regex: explicitly disabled via build config 00:01:57.550 test-sad: explicitly disabled via build config 00:01:57.550 test-security-perf: explicitly disabled via build config 00:01:57.550 00:01:57.550 libs: 00:01:57.550 metrics: explicitly disabled via build config 00:01:57.550 acl: explicitly disabled via build config 00:01:57.550 bbdev: explicitly disabled via build config 00:01:57.550 bitratestats: explicitly disabled via build config 00:01:57.550 bpf: explicitly disabled via build config 00:01:57.550 cfgfile: explicitly disabled via build config 00:01:57.550 distributor: explicitly disabled via build config 00:01:57.550 efd: explicitly disabled via build config 00:01:57.550 eventdev: explicitly disabled via build config 00:01:57.550 dispatcher: explicitly disabled via build config 00:01:57.550 gpudev: explicitly disabled via build config 00:01:57.550 gro: explicitly disabled via build config 00:01:57.550 gso: explicitly disabled via build config 00:01:57.550 ip_frag: explicitly disabled via build config 00:01:57.550 jobstats: explicitly disabled via build config 00:01:57.550 latencystats: explicitly disabled via build config 00:01:57.550 lpm: explicitly disabled via build config 00:01:57.550 member: explicitly disabled via build config 00:01:57.550 pcapng: explicitly disabled via build config 00:01:57.550 rawdev: explicitly disabled via build config 00:01:57.550 regexdev: explicitly disabled via build config 00:01:57.550 mldev: explicitly disabled via build config 00:01:57.550 rib: explicitly disabled via build config 00:01:57.550 sched: explicitly disabled via build config 00:01:57.550 stack: explicitly disabled via build config 00:01:57.550 ipsec: explicitly disabled via build config 00:01:57.550 pdcp: explicitly disabled via build config 00:01:57.550 fib: explicitly disabled via build config 00:01:57.550 port: explicitly disabled via build config 00:01:57.550 pdump: explicitly disabled via build config 00:01:57.550 table: explicitly disabled via build config 00:01:57.550 pipeline: explicitly disabled via build config 00:01:57.550 graph: explicitly disabled via build config 00:01:57.550 node: explicitly disabled via build config 00:01:57.550 00:01:57.550 drivers: 00:01:57.550 common/cpt: not in enabled drivers build config 00:01:57.550 common/dpaax: not in enabled drivers build config 00:01:57.550 common/iavf: not in enabled drivers build config 00:01:57.550 common/idpf: not in enabled drivers build config 00:01:57.550 common/mvep: not in enabled drivers build config 00:01:57.550 common/octeontx: not in enabled drivers build config 00:01:57.550 bus/auxiliary: not in enabled drivers build config 00:01:57.550 bus/cdx: not in enabled drivers build config 00:01:57.550 bus/dpaa: not in enabled drivers build config 00:01:57.550 bus/fslmc: not in enabled drivers build config 00:01:57.550 bus/ifpga: not in enabled drivers build config 00:01:57.550 bus/platform: not in enabled drivers build config 00:01:57.550 bus/vmbus: not in enabled drivers build config 00:01:57.550 common/cnxk: not in enabled drivers build config 00:01:57.550 common/mlx5: not in enabled drivers build config 00:01:57.550 common/nfp: not in enabled drivers 
build config 00:01:57.550 common/qat: not in enabled drivers build config 00:01:57.550 common/sfc_efx: not in enabled drivers build config 00:01:57.550 mempool/bucket: not in enabled drivers build config 00:01:57.550 mempool/cnxk: not in enabled drivers build config 00:01:57.550 mempool/dpaa: not in enabled drivers build config 00:01:57.550 mempool/dpaa2: not in enabled drivers build config 00:01:57.550 mempool/octeontx: not in enabled drivers build config 00:01:57.550 mempool/stack: not in enabled drivers build config 00:01:57.550 dma/cnxk: not in enabled drivers build config 00:01:57.550 dma/dpaa: not in enabled drivers build config 00:01:57.550 dma/dpaa2: not in enabled drivers build config 00:01:57.550 dma/hisilicon: not in enabled drivers build config 00:01:57.550 dma/idxd: not in enabled drivers build config 00:01:57.550 dma/ioat: not in enabled drivers build config 00:01:57.550 dma/skeleton: not in enabled drivers build config 00:01:57.550 net/af_packet: not in enabled drivers build config 00:01:57.550 net/af_xdp: not in enabled drivers build config 00:01:57.550 net/ark: not in enabled drivers build config 00:01:57.550 net/atlantic: not in enabled drivers build config 00:01:57.550 net/avp: not in enabled drivers build config 00:01:57.550 net/axgbe: not in enabled drivers build config 00:01:57.550 net/bnx2x: not in enabled drivers build config 00:01:57.550 net/bnxt: not in enabled drivers build config 00:01:57.550 net/bonding: not in enabled drivers build config 00:01:57.550 net/cnxk: not in enabled drivers build config 00:01:57.550 net/cpfl: not in enabled drivers build config 00:01:57.550 net/cxgbe: not in enabled drivers build config 00:01:57.550 net/dpaa: not in enabled drivers build config 00:01:57.550 net/dpaa2: not in enabled drivers build config 00:01:57.550 net/e1000: not in enabled drivers build config 00:01:57.550 net/ena: not in enabled drivers build config 00:01:57.550 net/enetc: not in enabled drivers build config 00:01:57.550 net/enetfec: not in enabled drivers build config 00:01:57.550 net/enic: not in enabled drivers build config 00:01:57.550 net/failsafe: not in enabled drivers build config 00:01:57.550 net/fm10k: not in enabled drivers build config 00:01:57.550 net/gve: not in enabled drivers build config 00:01:57.550 net/hinic: not in enabled drivers build config 00:01:57.550 net/hns3: not in enabled drivers build config 00:01:57.550 net/i40e: not in enabled drivers build config 00:01:57.550 net/iavf: not in enabled drivers build config 00:01:57.550 net/ice: not in enabled drivers build config 00:01:57.550 net/idpf: not in enabled drivers build config 00:01:57.550 net/igc: not in enabled drivers build config 00:01:57.550 net/ionic: not in enabled drivers build config 00:01:57.550 net/ipn3ke: not in enabled drivers build config 00:01:57.550 net/ixgbe: not in enabled drivers build config 00:01:57.550 net/mana: not in enabled drivers build config 00:01:57.550 net/memif: not in enabled drivers build config 00:01:57.550 net/mlx4: not in enabled drivers build config 00:01:57.550 net/mlx5: not in enabled drivers build config 00:01:57.550 net/mvneta: not in enabled drivers build config 00:01:57.550 net/mvpp2: not in enabled drivers build config 00:01:57.550 net/netvsc: not in enabled drivers build config 00:01:57.550 net/nfb: not in enabled drivers build config 00:01:57.550 net/nfp: not in enabled drivers build config 00:01:57.550 net/ngbe: not in enabled drivers build config 00:01:57.550 net/null: not in enabled drivers build config 00:01:57.550 net/octeontx: not in 
enabled drivers build config 00:01:57.550 net/octeon_ep: not in enabled drivers build config 00:01:57.550 net/pcap: not in enabled drivers build config 00:01:57.550 net/pfe: not in enabled drivers build config 00:01:57.550 net/qede: not in enabled drivers build config 00:01:57.550 net/ring: not in enabled drivers build config 00:01:57.550 net/sfc: not in enabled drivers build config 00:01:57.550 net/softnic: not in enabled drivers build config 00:01:57.550 net/tap: not in enabled drivers build config 00:01:57.550 net/thunderx: not in enabled drivers build config 00:01:57.550 net/txgbe: not in enabled drivers build config 00:01:57.550 net/vdev_netvsc: not in enabled drivers build config 00:01:57.550 net/vhost: not in enabled drivers build config 00:01:57.550 net/virtio: not in enabled drivers build config 00:01:57.550 net/vmxnet3: not in enabled drivers build config 00:01:57.550 raw/*: missing internal dependency, "rawdev" 00:01:57.550 crypto/armv8: not in enabled drivers build config 00:01:57.550 crypto/bcmfs: not in enabled drivers build config 00:01:57.550 crypto/caam_jr: not in enabled drivers build config 00:01:57.550 crypto/ccp: not in enabled drivers build config 00:01:57.550 crypto/cnxk: not in enabled drivers build config 00:01:57.550 crypto/dpaa_sec: not in enabled drivers build config 00:01:57.550 crypto/dpaa2_sec: not in enabled drivers build config 00:01:57.550 crypto/ipsec_mb: not in enabled drivers build config 00:01:57.550 crypto/mlx5: not in enabled drivers build config 00:01:57.550 crypto/mvsam: not in enabled drivers build config 00:01:57.550 crypto/nitrox: not in enabled drivers build config 00:01:57.550 crypto/null: not in enabled drivers build config 00:01:57.550 crypto/octeontx: not in enabled drivers build config 00:01:57.550 crypto/openssl: not in enabled drivers build config 00:01:57.550 crypto/scheduler: not in enabled drivers build config 00:01:57.550 crypto/uadk: not in enabled drivers build config 00:01:57.550 crypto/virtio: not in enabled drivers build config 00:01:57.550 compress/isal: not in enabled drivers build config 00:01:57.550 compress/mlx5: not in enabled drivers build config 00:01:57.550 compress/octeontx: not in enabled drivers build config 00:01:57.550 compress/zlib: not in enabled drivers build config 00:01:57.550 regex/*: missing internal dependency, "regexdev" 00:01:57.550 ml/*: missing internal dependency, "mldev" 00:01:57.550 vdpa/ifc: not in enabled drivers build config 00:01:57.550 vdpa/mlx5: not in enabled drivers build config 00:01:57.550 vdpa/nfp: not in enabled drivers build config 00:01:57.550 vdpa/sfc: not in enabled drivers build config 00:01:57.550 event/*: missing internal dependency, "eventdev" 00:01:57.550 baseband/*: missing internal dependency, "bbdev" 00:01:57.550 gpu/*: missing internal dependency, "gpudev" 00:01:57.550 00:01:57.550 00:01:57.550 Build targets in project: 84 00:01:57.550 00:01:57.550 DPDK 23.11.0 00:01:57.550 00:01:57.550 User defined options 00:01:57.550 buildtype : debug 00:01:57.550 default_library : shared 00:01:57.550 libdir : lib 00:01:57.550 prefix : /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:01:57.550 b_sanitize : address 00:01:57.550 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:57.550 c_link_args : 00:01:57.550 cpu_instruction_set: native 00:01:57.551 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:57.551 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:57.551 enable_docs : false 00:01:57.551 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:57.551 enable_kmods : false 00:01:57.551 tests : false 00:01:57.551 00:01:57.551 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:57.809 ninja: Entering directory `/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp' 00:01:58.080 [1/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:58.080 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:58.080 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:58.080 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:58.080 [5/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:58.080 [6/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:58.080 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:58.080 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:58.080 [9/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:58.080 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:58.081 [11/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:58.081 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:58.340 [13/264] Linking static target lib/librte_kvargs.a 00:01:58.340 [14/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:58.340 [15/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:58.340 [16/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:58.340 [17/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:58.340 [18/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:58.340 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:58.340 [20/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:58.340 [21/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:58.340 [22/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:58.340 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:58.340 [24/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:58.340 [25/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:58.340 [26/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:58.340 [27/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:58.340 [28/264] Linking static target lib/librte_pci.a 00:01:58.340 [29/264] Linking static target lib/librte_log.a 00:01:58.340 [30/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:58.340 [31/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:58.340 [32/264] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:58.340 [33/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:58.340 [34/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:58.340 [35/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:58.340 [36/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:58.599 [37/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:58.599 [38/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:58.599 [39/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:58.599 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:58.599 [41/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:58.599 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:58.599 [43/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:58.599 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:58.599 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:58.599 [46/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:58.599 [47/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:58.599 [48/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:58.599 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:58.599 [50/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:58.599 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:58.599 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:58.599 [53/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:58.599 [54/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:58.599 [55/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:58.599 [56/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:58.599 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:58.599 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:58.599 [59/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:58.599 [60/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:58.599 [61/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:58.599 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:58.599 [63/264] Linking static target lib/librte_meter.a 00:01:58.599 [64/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:58.599 [65/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:58.599 [66/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:58.599 [67/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:58.599 [68/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:58.599 [69/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:58.599 [70/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:58.599 [71/264] Linking static target lib/librte_telemetry.a 00:01:58.599 [72/264] Compiling 
C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:58.600 [73/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:58.600 [74/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:58.600 [75/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.600 [76/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:58.600 [77/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:58.600 [78/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.600 [79/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:58.600 [80/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:58.600 [81/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:58.600 [82/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:58.600 [83/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:58.600 [84/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:58.858 [85/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:58.858 [86/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:58.858 [87/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:58.858 [88/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:58.858 [89/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:58.858 [90/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:58.858 [91/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:58.858 [92/264] Linking static target lib/librte_ring.a 00:01:58.859 [93/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:58.859 [94/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:58.859 [95/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:58.859 [96/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:58.859 [97/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:58.859 [98/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:58.859 [99/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:58.859 [100/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:58.859 [101/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:58.859 [102/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:58.859 [103/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:58.859 [104/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.859 [105/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:58.859 [106/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:58.859 [107/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:58.859 [108/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:58.859 [109/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:58.859 [110/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:58.859 [111/264] Linking static target lib/librte_dmadev.a 00:01:58.859 [112/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:58.859 [113/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:58.859 [114/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:58.859 [115/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:58.859 [116/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:58.859 [117/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:58.859 [118/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:58.859 [119/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:58.859 [120/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:58.859 [121/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:58.859 [122/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:58.859 [123/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:58.859 [124/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:58.859 [125/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:58.859 [126/264] Linking static target lib/librte_cmdline.a 00:01:58.859 [127/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:58.859 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:58.859 [129/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:58.859 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:58.859 [131/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:58.859 [132/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:58.859 [133/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:58.859 [134/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:58.859 [135/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.859 [136/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:58.859 [137/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:58.859 [138/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:58.859 [139/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:58.859 [140/264] Linking static target lib/librte_timer.a 00:01:58.859 [141/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:58.859 [142/264] Linking target lib/librte_log.so.24.0 00:01:58.859 [143/264] Linking static target lib/librte_net.a 00:01:58.859 [144/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:58.859 [145/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:58.859 [146/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:58.859 [147/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:58.859 [148/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.859 [149/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:58.859 [150/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.859 [151/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:58.859 [152/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:58.859 [153/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:58.859 [154/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:58.859 [155/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:58.859 [156/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:58.859 [157/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:58.859 [158/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:58.859 [159/264] Linking static target lib/librte_reorder.a 00:01:58.859 [160/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:59.117 [161/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:59.117 [162/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:59.117 [163/264] Linking static target lib/librte_power.a 00:01:59.117 [164/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:59.117 [165/264] Linking static target lib/librte_compressdev.a 00:01:59.117 [166/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:59.117 [167/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:59.117 [168/264] Linking static target lib/librte_eal.a 00:01:59.117 [169/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:59.117 [170/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:59.117 [171/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:59.117 [172/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:59.117 [173/264] Linking static target lib/librte_security.a 00:01:59.117 [174/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:59.117 [175/264] Linking static target lib/librte_mempool.a 00:01:59.117 [176/264] Linking target lib/librte_kvargs.so.24.0 00:01:59.117 [177/264] Linking target lib/librte_telemetry.so.24.0 00:01:59.117 [178/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:59.117 [179/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:59.117 [180/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.117 [181/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.117 [182/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.117 [183/264] Linking static target drivers/librte_bus_vdev.a 00:01:59.117 [184/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.117 [185/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:59.117 [186/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:59.117 [187/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:59.117 [188/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:59.117 [189/264] Linking static target lib/librte_rcu.a 00:01:59.117 [190/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:59.117 [191/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:59.117 [192/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:59.117 [193/264] 
Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:59.117 [194/264] Linking static target lib/librte_mbuf.a 00:01:59.117 [195/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.117 [196/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.117 [197/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.117 [198/264] Linking static target drivers/librte_bus_pci.a 00:01:59.117 [199/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.376 [200/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:59.376 [201/264] Linking static target lib/librte_hash.a 00:01:59.376 [202/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:59.376 [203/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.376 [204/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.376 [205/264] Linking static target drivers/librte_mempool_ring.a 00:01:59.376 [206/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.376 [207/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:59.376 [208/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.376 [209/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:59.376 [210/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.376 [211/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.376 [212/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.634 [213/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.634 [214/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.634 [215/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.634 [216/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.634 [217/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:59.634 [218/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:59.634 [219/264] Linking static target lib/librte_cryptodev.a 00:01:59.634 [220/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.201 [221/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:00.201 [222/264] Linking static target lib/librte_ethdev.a 00:02:00.767 [223/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:01.056 [224/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.954 [225/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:02.954 [226/264] Linking static target lib/librte_vhost.a 00:02:04.345 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.714 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.714 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson 
to capture output) 00:02:05.714 [230/264] Linking target lib/librte_eal.so.24.0 00:02:05.714 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:05.714 [232/264] Linking target lib/librte_ring.so.24.0 00:02:05.714 [233/264] Linking target lib/librte_pci.so.24.0 00:02:05.714 [234/264] Linking target lib/librte_dmadev.so.24.0 00:02:05.714 [235/264] Linking target lib/librte_meter.so.24.0 00:02:05.714 [236/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:05.714 [237/264] Linking target lib/librte_timer.so.24.0 00:02:05.714 [238/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:05.714 [239/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:05.714 [240/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:05.714 [241/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:05.714 [242/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:05.714 [243/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:05.714 [244/264] Linking target lib/librte_rcu.so.24.0 00:02:05.714 [245/264] Linking target lib/librte_mempool.so.24.0 00:02:05.973 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:05.973 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:05.973 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:05.973 [249/264] Linking target lib/librte_mbuf.so.24.0 00:02:05.973 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:05.973 [251/264] Linking target lib/librte_compressdev.so.24.0 00:02:05.973 [252/264] Linking target lib/librte_reorder.so.24.0 00:02:05.973 [253/264] Linking target lib/librte_net.so.24.0 00:02:05.973 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:02:06.230 [255/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:06.230 [256/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:06.230 [257/264] Linking target lib/librte_security.so.24.0 00:02:06.230 [258/264] Linking target lib/librte_hash.so.24.0 00:02:06.230 [259/264] Linking target lib/librte_cmdline.so.24.0 00:02:06.230 [260/264] Linking target lib/librte_ethdev.so.24.0 00:02:06.230 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:06.230 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:06.230 [263/264] Linking target lib/librte_power.so.24.0 00:02:06.230 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:06.230 INFO: autodetecting backend as ninja 00:02:06.230 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp -j 128 00:02:06.794 CC lib/ut_mock/mock.o 00:02:06.794 CC lib/log/log.o 00:02:06.794 CC lib/log/log_flags.o 00:02:06.794 CC lib/log/log_deprecated.o 00:02:07.052 CC lib/ut/ut.o 00:02:07.052 LIB libspdk_ut_mock.a 00:02:07.052 SO libspdk_ut_mock.so.5.0 00:02:07.052 LIB libspdk_log.a 00:02:07.052 LIB libspdk_ut.a 00:02:07.052 SO libspdk_ut.so.1.0 00:02:07.052 SO libspdk_log.so.6.1 00:02:07.052 SYMLINK libspdk_ut_mock.so 00:02:07.052 SYMLINK libspdk_log.so 00:02:07.052 SYMLINK libspdk_ut.so 00:02:07.310 CC lib/util/cpuset.o 00:02:07.310 CC lib/util/base64.o 00:02:07.310 CC lib/util/crc16.o 00:02:07.310 
CC lib/util/bit_array.o 00:02:07.310 CC lib/util/crc32.o 00:02:07.310 CC lib/util/crc32c.o 00:02:07.310 CC lib/util/crc32_ieee.o 00:02:07.310 CC lib/util/crc64.o 00:02:07.310 CC lib/util/fd.o 00:02:07.310 CC lib/util/dif.o 00:02:07.310 CC lib/util/file.o 00:02:07.310 CC lib/util/iov.o 00:02:07.310 CC lib/dma/dma.o 00:02:07.310 CC lib/util/pipe.o 00:02:07.310 CC lib/ioat/ioat.o 00:02:07.310 CC lib/util/hexlify.o 00:02:07.310 CC lib/util/math.o 00:02:07.310 CXX lib/trace_parser/trace.o 00:02:07.310 CC lib/util/strerror_tls.o 00:02:07.310 CC lib/util/string.o 00:02:07.310 CC lib/util/uuid.o 00:02:07.310 CC lib/util/fd_group.o 00:02:07.310 CC lib/util/zipf.o 00:02:07.310 CC lib/util/xor.o 00:02:07.310 CC lib/vfio_user/host/vfio_user_pci.o 00:02:07.310 CC lib/vfio_user/host/vfio_user.o 00:02:07.310 LIB libspdk_dma.a 00:02:07.310 SO libspdk_dma.so.3.0 00:02:07.567 SYMLINK libspdk_dma.so 00:02:07.567 LIB libspdk_ioat.a 00:02:07.567 LIB libspdk_vfio_user.a 00:02:07.567 SO libspdk_ioat.so.6.0 00:02:07.567 SO libspdk_vfio_user.so.4.0 00:02:07.567 SYMLINK libspdk_ioat.so 00:02:07.567 SYMLINK libspdk_vfio_user.so 00:02:07.567 LIB libspdk_util.a 00:02:07.826 SO libspdk_util.so.8.0 00:02:07.826 SYMLINK libspdk_util.so 00:02:07.826 LIB libspdk_trace_parser.a 00:02:07.826 SO libspdk_trace_parser.so.4.0 00:02:07.826 CC lib/idxd/idxd.o 00:02:07.826 CC lib/idxd/idxd_user.o 00:02:07.826 CC lib/env_dpdk/pci.o 00:02:07.826 CC lib/env_dpdk/env.o 00:02:07.826 CC lib/env_dpdk/memory.o 00:02:07.826 CC lib/env_dpdk/threads.o 00:02:07.826 CC lib/env_dpdk/pci_virtio.o 00:02:08.084 CC lib/env_dpdk/init.o 00:02:08.084 CC lib/env_dpdk/pci_vmd.o 00:02:08.084 CC lib/env_dpdk/pci_idxd.o 00:02:08.084 CC lib/env_dpdk/pci_ioat.o 00:02:08.084 CC lib/rdma/common.o 00:02:08.084 CC lib/env_dpdk/sigbus_handler.o 00:02:08.084 CC lib/rdma/rdma_verbs.o 00:02:08.084 CC lib/env_dpdk/pci_dpdk.o 00:02:08.084 CC lib/env_dpdk/pci_event.o 00:02:08.084 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:08.084 CC lib/vmd/vmd.o 00:02:08.084 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:08.084 CC lib/json/json_util.o 00:02:08.084 CC lib/json/json_parse.o 00:02:08.084 CC lib/vmd/led.o 00:02:08.084 CC lib/json/json_write.o 00:02:08.084 CC lib/conf/conf.o 00:02:08.084 SYMLINK libspdk_trace_parser.so 00:02:08.084 LIB libspdk_conf.a 00:02:08.084 SO libspdk_conf.so.5.0 00:02:08.342 SYMLINK libspdk_conf.so 00:02:08.342 LIB libspdk_rdma.a 00:02:08.342 SO libspdk_rdma.so.5.0 00:02:08.342 LIB libspdk_json.a 00:02:08.342 SO libspdk_json.so.5.1 00:02:08.342 SYMLINK libspdk_rdma.so 00:02:08.342 SYMLINK libspdk_json.so 00:02:08.609 CC lib/jsonrpc/jsonrpc_server.o 00:02:08.609 CC lib/jsonrpc/jsonrpc_client.o 00:02:08.609 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:08.609 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:08.609 LIB libspdk_idxd.a 00:02:08.609 SO libspdk_idxd.so.11.0 00:02:08.609 SYMLINK libspdk_idxd.so 00:02:08.609 LIB libspdk_vmd.a 00:02:08.609 SO libspdk_vmd.so.5.0 00:02:08.609 SYMLINK libspdk_vmd.so 00:02:08.867 LIB libspdk_jsonrpc.a 00:02:08.867 SO libspdk_jsonrpc.so.5.1 00:02:08.867 SYMLINK libspdk_jsonrpc.so 00:02:08.867 LIB libspdk_env_dpdk.a 00:02:08.867 CC lib/rpc/rpc.o 00:02:08.867 SO libspdk_env_dpdk.so.13.0 00:02:09.125 SYMLINK libspdk_env_dpdk.so 00:02:09.125 LIB libspdk_rpc.a 00:02:09.125 SO libspdk_rpc.so.5.0 00:02:09.125 SYMLINK libspdk_rpc.so 00:02:09.384 CC lib/sock/sock.o 00:02:09.384 CC lib/sock/sock_rpc.o 00:02:09.384 CC lib/notify/notify.o 00:02:09.384 CC lib/notify/notify_rpc.o 00:02:09.384 CC lib/trace/trace.o 00:02:09.384 CC lib/trace/trace_flags.o 
00:02:09.384 CC lib/trace/trace_rpc.o 00:02:09.384 LIB libspdk_notify.a 00:02:09.384 SO libspdk_notify.so.5.0 00:02:09.384 LIB libspdk_trace.a 00:02:09.643 SYMLINK libspdk_notify.so 00:02:09.643 SO libspdk_trace.so.9.0 00:02:09.643 LIB libspdk_sock.a 00:02:09.643 SO libspdk_sock.so.8.0 00:02:09.643 SYMLINK libspdk_trace.so 00:02:09.643 SYMLINK libspdk_sock.so 00:02:09.643 CC lib/thread/thread.o 00:02:09.643 CC lib/thread/iobuf.o 00:02:09.643 CC lib/nvme/nvme_fabric.o 00:02:09.643 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:09.643 CC lib/nvme/nvme_ctrlr.o 00:02:09.643 CC lib/nvme/nvme_pcie_common.o 00:02:09.643 CC lib/nvme/nvme_ns_cmd.o 00:02:09.643 CC lib/nvme/nvme_ns.o 00:02:09.643 CC lib/nvme/nvme_pcie.o 00:02:09.643 CC lib/nvme/nvme_qpair.o 00:02:09.643 CC lib/nvme/nvme_quirks.o 00:02:09.643 CC lib/nvme/nvme.o 00:02:09.643 CC lib/nvme/nvme_transport.o 00:02:09.643 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:09.643 CC lib/nvme/nvme_discovery.o 00:02:09.643 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:09.643 CC lib/nvme/nvme_tcp.o 00:02:09.643 CC lib/nvme/nvme_io_msg.o 00:02:09.643 CC lib/nvme/nvme_opal.o 00:02:09.643 CC lib/nvme/nvme_zns.o 00:02:09.643 CC lib/nvme/nvme_poll_group.o 00:02:09.643 CC lib/nvme/nvme_cuse.o 00:02:09.643 CC lib/nvme/nvme_vfio_user.o 00:02:09.643 CC lib/nvme/nvme_rdma.o 00:02:11.021 LIB libspdk_thread.a 00:02:11.021 SO libspdk_thread.so.9.0 00:02:11.021 SYMLINK libspdk_thread.so 00:02:11.021 CC lib/blob/blobstore.o 00:02:11.021 CC lib/blob/request.o 00:02:11.021 CC lib/blob/blob_bs_dev.o 00:02:11.021 CC lib/blob/zeroes.o 00:02:11.021 CC lib/virtio/virtio_vfio_user.o 00:02:11.021 CC lib/virtio/virtio.o 00:02:11.021 CC lib/virtio/virtio_vhost_user.o 00:02:11.021 CC lib/virtio/virtio_pci.o 00:02:11.021 CC lib/init/subsystem.o 00:02:11.021 CC lib/init/rpc.o 00:02:11.021 CC lib/init/subsystem_rpc.o 00:02:11.021 CC lib/init/json_config.o 00:02:11.021 CC lib/accel/accel.o 00:02:11.021 CC lib/accel/accel_rpc.o 00:02:11.021 CC lib/accel/accel_sw.o 00:02:11.280 LIB libspdk_init.a 00:02:11.280 SO libspdk_init.so.4.0 00:02:11.539 SYMLINK libspdk_init.so 00:02:11.539 LIB libspdk_virtio.a 00:02:11.539 SO libspdk_virtio.so.6.0 00:02:11.539 SYMLINK libspdk_virtio.so 00:02:11.539 CC lib/event/app.o 00:02:11.539 CC lib/event/reactor.o 00:02:11.539 CC lib/event/log_rpc.o 00:02:11.539 CC lib/event/app_rpc.o 00:02:11.539 CC lib/event/scheduler_static.o 00:02:12.105 LIB libspdk_event.a 00:02:12.105 LIB libspdk_nvme.a 00:02:12.105 SO libspdk_event.so.12.0 00:02:12.105 SYMLINK libspdk_event.so 00:02:12.105 SO libspdk_nvme.so.12.0 00:02:12.364 LIB libspdk_accel.a 00:02:12.364 SO libspdk_accel.so.14.0 00:02:12.364 SYMLINK libspdk_accel.so 00:02:12.364 SYMLINK libspdk_nvme.so 00:02:12.623 CC lib/bdev/bdev_rpc.o 00:02:12.623 CC lib/bdev/bdev.o 00:02:12.623 CC lib/bdev/part.o 00:02:12.623 CC lib/bdev/bdev_zone.o 00:02:12.623 CC lib/bdev/scsi_nvme.o 00:02:13.193 LIB libspdk_blob.a 00:02:13.193 SO libspdk_blob.so.10.1 00:02:13.453 SYMLINK libspdk_blob.so 00:02:13.711 CC lib/lvol/lvol.o 00:02:13.711 CC lib/blobfs/blobfs.o 00:02:13.711 CC lib/blobfs/tree.o 00:02:14.647 LIB libspdk_blobfs.a 00:02:14.647 SO libspdk_blobfs.so.9.0 00:02:14.647 LIB libspdk_lvol.a 00:02:14.647 SO libspdk_lvol.so.9.1 00:02:14.647 SYMLINK libspdk_blobfs.so 00:02:14.647 SYMLINK libspdk_lvol.so 00:02:14.647 LIB libspdk_bdev.a 00:02:14.905 SO libspdk_bdev.so.14.0 00:02:14.905 SYMLINK libspdk_bdev.so 00:02:15.163 CC lib/scsi/dev.o 00:02:15.163 CC lib/scsi/port.o 00:02:15.163 CC lib/scsi/scsi_bdev.o 00:02:15.163 CC lib/scsi/lun.o 00:02:15.163 
CC lib/scsi/scsi_pr.o 00:02:15.163 CC lib/scsi/scsi.o 00:02:15.163 CC lib/scsi/scsi_rpc.o 00:02:15.163 CC lib/ftl/ftl_layout.o 00:02:15.163 CC lib/ftl/ftl_core.o 00:02:15.163 CC lib/scsi/task.o 00:02:15.163 CC lib/ublk/ublk.o 00:02:15.163 CC lib/ftl/ftl_init.o 00:02:15.163 CC lib/ftl/ftl_sb.o 00:02:15.163 CC lib/nbd/nbd.o 00:02:15.163 CC lib/ftl/ftl_io.o 00:02:15.163 CC lib/ublk/ublk_rpc.o 00:02:15.163 CC lib/ftl/ftl_debug.o 00:02:15.163 CC lib/nbd/nbd_rpc.o 00:02:15.163 CC lib/ftl/ftl_l2p.o 00:02:15.163 CC lib/ftl/ftl_band.o 00:02:15.163 CC lib/ftl/ftl_l2p_flat.o 00:02:15.163 CC lib/ftl/ftl_nv_cache.o 00:02:15.163 CC lib/ftl/ftl_writer.o 00:02:15.163 CC lib/ftl/ftl_band_ops.o 00:02:15.163 CC lib/ftl/ftl_rq.o 00:02:15.163 CC lib/nvmf/ctrlr_bdev.o 00:02:15.163 CC lib/nvmf/ctrlr.o 00:02:15.163 CC lib/ftl/ftl_l2p_cache.o 00:02:15.163 CC lib/nvmf/nvmf.o 00:02:15.163 CC lib/ftl/ftl_p2l.o 00:02:15.163 CC lib/nvmf/ctrlr_discovery.o 00:02:15.163 CC lib/ftl/ftl_reloc.o 00:02:15.163 CC lib/nvmf/subsystem.o 00:02:15.163 CC lib/ftl/mngt/ftl_mngt.o 00:02:15.163 CC lib/nvmf/transport.o 00:02:15.163 CC lib/nvmf/nvmf_rpc.o 00:02:15.163 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:15.163 CC lib/nvmf/tcp.o 00:02:15.163 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:15.163 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:15.163 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:15.163 CC lib/nvmf/rdma.o 00:02:15.163 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:15.163 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:15.163 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:15.163 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:15.163 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:15.163 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:15.163 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:15.163 CC lib/ftl/utils/ftl_conf.o 00:02:15.163 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:15.163 CC lib/ftl/utils/ftl_mempool.o 00:02:15.163 CC lib/ftl/utils/ftl_md.o 00:02:15.163 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:15.163 CC lib/ftl/utils/ftl_bitmap.o 00:02:15.163 CC lib/ftl/utils/ftl_property.o 00:02:15.163 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:15.163 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:15.163 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:15.163 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:15.163 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:15.163 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:15.163 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:15.163 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:15.163 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:15.163 CC lib/ftl/base/ftl_base_bdev.o 00:02:15.163 CC lib/ftl/ftl_trace.o 00:02:15.163 CC lib/ftl/base/ftl_base_dev.o 00:02:15.732 LIB libspdk_nbd.a 00:02:15.732 SO libspdk_nbd.so.6.0 00:02:15.732 SYMLINK libspdk_nbd.so 00:02:15.732 LIB libspdk_scsi.a 00:02:15.732 SO libspdk_scsi.so.8.0 00:02:15.732 LIB libspdk_ublk.a 00:02:15.732 SO libspdk_ublk.so.2.0 00:02:15.732 SYMLINK libspdk_scsi.so 00:02:15.990 SYMLINK libspdk_ublk.so 00:02:15.990 CC lib/vhost/vhost_rpc.o 00:02:15.990 CC lib/vhost/vhost.o 00:02:15.990 CC lib/vhost/vhost_blk.o 00:02:15.990 CC lib/vhost/vhost_scsi.o 00:02:15.990 CC lib/vhost/rte_vhost_user.o 00:02:15.990 CC lib/iscsi/init_grp.o 00:02:15.990 CC lib/iscsi/iscsi.o 00:02:15.990 CC lib/iscsi/md5.o 00:02:15.990 CC lib/iscsi/conn.o 00:02:15.990 CC lib/iscsi/tgt_node.o 00:02:15.990 CC lib/iscsi/param.o 00:02:15.990 CC lib/iscsi/portal_grp.o 00:02:15.990 CC lib/iscsi/iscsi_rpc.o 00:02:15.990 CC lib/iscsi/iscsi_subsystem.o 00:02:15.990 CC lib/iscsi/task.o 00:02:15.990 LIB libspdk_ftl.a 00:02:16.249 SO libspdk_ftl.so.8.0 00:02:16.508 SYMLINK libspdk_ftl.so 00:02:17.075 LIB 
libspdk_vhost.a 00:02:17.075 LIB libspdk_iscsi.a 00:02:17.075 SO libspdk_iscsi.so.7.0 00:02:17.075 SO libspdk_vhost.so.7.1 00:02:17.075 LIB libspdk_nvmf.a 00:02:17.075 SO libspdk_nvmf.so.17.0 00:02:17.075 SYMLINK libspdk_vhost.so 00:02:17.075 SYMLINK libspdk_iscsi.so 00:02:17.332 SYMLINK libspdk_nvmf.so 00:02:17.332 CC module/env_dpdk/env_dpdk_rpc.o 00:02:17.591 CC module/sock/posix/posix.o 00:02:17.591 CC module/scheduler/gscheduler/gscheduler.o 00:02:17.591 CC module/accel/iaa/accel_iaa_rpc.o 00:02:17.591 CC module/accel/iaa/accel_iaa.o 00:02:17.591 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:17.591 CC module/accel/error/accel_error.o 00:02:17.591 CC module/accel/error/accel_error_rpc.o 00:02:17.591 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:17.591 CC module/accel/dsa/accel_dsa.o 00:02:17.591 CC module/blob/bdev/blob_bdev.o 00:02:17.591 CC module/accel/dsa/accel_dsa_rpc.o 00:02:17.591 CC module/accel/ioat/accel_ioat.o 00:02:17.591 CC module/accel/ioat/accel_ioat_rpc.o 00:02:17.591 LIB libspdk_env_dpdk_rpc.a 00:02:17.591 SO libspdk_env_dpdk_rpc.so.5.0 00:02:17.591 SYMLINK libspdk_env_dpdk_rpc.so 00:02:17.591 LIB libspdk_scheduler_dynamic.a 00:02:17.591 LIB libspdk_accel_error.a 00:02:17.591 LIB libspdk_scheduler_gscheduler.a 00:02:17.591 LIB libspdk_scheduler_dpdk_governor.a 00:02:17.591 SO libspdk_scheduler_dynamic.so.3.0 00:02:17.591 SO libspdk_accel_error.so.1.0 00:02:17.591 SO libspdk_scheduler_gscheduler.so.3.0 00:02:17.591 LIB libspdk_accel_dsa.a 00:02:17.591 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:17.591 LIB libspdk_blob_bdev.a 00:02:17.591 SYMLINK libspdk_scheduler_dynamic.so 00:02:17.591 LIB libspdk_accel_ioat.a 00:02:17.591 SO libspdk_accel_dsa.so.4.0 00:02:17.591 SYMLINK libspdk_scheduler_gscheduler.so 00:02:17.591 LIB libspdk_accel_iaa.a 00:02:17.850 SO libspdk_blob_bdev.so.10.1 00:02:17.850 SYMLINK libspdk_accel_error.so 00:02:17.850 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:17.850 SO libspdk_accel_ioat.so.5.0 00:02:17.850 SO libspdk_accel_iaa.so.2.0 00:02:17.850 SYMLINK libspdk_accel_dsa.so 00:02:17.850 SYMLINK libspdk_blob_bdev.so 00:02:17.850 SYMLINK libspdk_accel_ioat.so 00:02:17.850 SYMLINK libspdk_accel_iaa.so 00:02:17.850 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:17.850 CC module/bdev/iscsi/bdev_iscsi.o 00:02:17.850 CC module/bdev/null/bdev_null.o 00:02:17.850 CC module/bdev/raid/bdev_raid.o 00:02:17.850 CC module/bdev/null/bdev_null_rpc.o 00:02:17.850 CC module/bdev/nvme/bdev_nvme.o 00:02:17.850 CC module/bdev/raid/bdev_raid_rpc.o 00:02:17.850 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:17.850 CC module/bdev/nvme/nvme_rpc.o 00:02:17.850 CC module/bdev/raid/raid1.o 00:02:17.850 CC module/bdev/raid/raid0.o 00:02:17.850 CC module/bdev/nvme/vbdev_opal.o 00:02:17.850 CC module/bdev/raid/bdev_raid_sb.o 00:02:17.850 CC module/bdev/nvme/bdev_mdns_client.o 00:02:17.850 CC module/bdev/error/vbdev_error.o 00:02:17.850 CC module/bdev/error/vbdev_error_rpc.o 00:02:17.850 CC module/bdev/raid/concat.o 00:02:17.850 CC module/bdev/delay/vbdev_delay.o 00:02:17.850 CC module/bdev/aio/bdev_aio.o 00:02:17.850 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:17.850 CC module/bdev/malloc/bdev_malloc.o 00:02:17.850 CC module/bdev/split/vbdev_split_rpc.o 00:02:17.850 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:17.850 CC module/bdev/gpt/gpt.o 00:02:17.850 CC module/bdev/aio/bdev_aio_rpc.o 00:02:17.850 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:17.850 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:17.850 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:17.850 
CC module/bdev/gpt/vbdev_gpt.o 00:02:17.850 CC module/bdev/split/vbdev_split.o 00:02:17.850 CC module/bdev/ftl/bdev_ftl.o 00:02:17.850 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:17.850 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:17.850 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:17.850 CC module/bdev/lvol/vbdev_lvol.o 00:02:17.850 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:17.850 CC module/bdev/passthru/vbdev_passthru.o 00:02:17.850 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:17.850 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:18.108 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:18.108 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:18.108 CC module/blobfs/bdev/blobfs_bdev.o 00:02:18.108 LIB libspdk_sock_posix.a 00:02:18.108 SO libspdk_sock_posix.so.5.0 00:02:18.108 SYMLINK libspdk_sock_posix.so 00:02:18.108 LIB libspdk_blobfs_bdev.a 00:02:18.108 SO libspdk_blobfs_bdev.so.5.0 00:02:18.108 LIB libspdk_bdev_error.a 00:02:18.366 LIB libspdk_bdev_gpt.a 00:02:18.366 SO libspdk_bdev_error.so.5.0 00:02:18.366 SYMLINK libspdk_blobfs_bdev.so 00:02:18.366 SO libspdk_bdev_gpt.so.5.0 00:02:18.366 LIB libspdk_bdev_split.a 00:02:18.366 LIB libspdk_bdev_zone_block.a 00:02:18.366 SO libspdk_bdev_split.so.5.0 00:02:18.366 SYMLINK libspdk_bdev_error.so 00:02:18.366 LIB libspdk_bdev_null.a 00:02:18.366 SYMLINK libspdk_bdev_gpt.so 00:02:18.366 SO libspdk_bdev_zone_block.so.5.0 00:02:18.366 LIB libspdk_bdev_iscsi.a 00:02:18.366 SO libspdk_bdev_null.so.5.0 00:02:18.366 LIB libspdk_bdev_ftl.a 00:02:18.366 LIB libspdk_bdev_passthru.a 00:02:18.366 SO libspdk_bdev_iscsi.so.5.0 00:02:18.366 SYMLINK libspdk_bdev_split.so 00:02:18.366 LIB libspdk_bdev_aio.a 00:02:18.366 SO libspdk_bdev_ftl.so.5.0 00:02:18.366 SO libspdk_bdev_passthru.so.5.0 00:02:18.366 SYMLINK libspdk_bdev_zone_block.so 00:02:18.366 LIB libspdk_bdev_delay.a 00:02:18.366 SO libspdk_bdev_aio.so.5.0 00:02:18.366 SYMLINK libspdk_bdev_null.so 00:02:18.366 SYMLINK libspdk_bdev_iscsi.so 00:02:18.366 SO libspdk_bdev_delay.so.5.0 00:02:18.366 SYMLINK libspdk_bdev_ftl.so 00:02:18.366 SYMLINK libspdk_bdev_aio.so 00:02:18.366 LIB libspdk_bdev_lvol.a 00:02:18.366 LIB libspdk_bdev_virtio.a 00:02:18.366 SYMLINK libspdk_bdev_passthru.so 00:02:18.366 LIB libspdk_bdev_malloc.a 00:02:18.366 SYMLINK libspdk_bdev_delay.so 00:02:18.366 SO libspdk_bdev_lvol.so.5.0 00:02:18.366 SO libspdk_bdev_virtio.so.5.0 00:02:18.663 SO libspdk_bdev_malloc.so.5.0 00:02:18.663 SYMLINK libspdk_bdev_lvol.so 00:02:18.663 SYMLINK libspdk_bdev_virtio.so 00:02:18.663 SYMLINK libspdk_bdev_malloc.so 00:02:18.663 LIB libspdk_bdev_raid.a 00:02:18.663 SO libspdk_bdev_raid.so.5.0 00:02:18.955 SYMLINK libspdk_bdev_raid.so 00:02:19.522 LIB libspdk_bdev_nvme.a 00:02:19.522 SO libspdk_bdev_nvme.so.6.0 00:02:19.780 SYMLINK libspdk_bdev_nvme.so 00:02:20.038 CC module/event/subsystems/sock/sock.o 00:02:20.038 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:20.038 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:20.038 CC module/event/subsystems/scheduler/scheduler.o 00:02:20.038 CC module/event/subsystems/vmd/vmd.o 00:02:20.038 CC module/event/subsystems/iobuf/iobuf.o 00:02:20.038 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:20.038 LIB libspdk_event_sock.a 00:02:20.038 LIB libspdk_event_vhost_blk.a 00:02:20.038 SO libspdk_event_sock.so.4.0 00:02:20.038 SO libspdk_event_vhost_blk.so.2.0 00:02:20.038 LIB libspdk_event_iobuf.a 00:02:20.038 SYMLINK libspdk_event_sock.so 00:02:20.038 LIB libspdk_event_scheduler.a 00:02:20.038 SO libspdk_event_iobuf.so.2.0 00:02:20.038 LIB 
libspdk_event_vmd.a 00:02:20.038 SYMLINK libspdk_event_vhost_blk.so 00:02:20.038 SO libspdk_event_scheduler.so.3.0 00:02:20.038 SO libspdk_event_vmd.so.5.0 00:02:20.296 SYMLINK libspdk_event_scheduler.so 00:02:20.296 SYMLINK libspdk_event_iobuf.so 00:02:20.296 SYMLINK libspdk_event_vmd.so 00:02:20.296 CC module/event/subsystems/accel/accel.o 00:02:20.555 LIB libspdk_event_accel.a 00:02:20.555 SO libspdk_event_accel.so.5.0 00:02:20.555 SYMLINK libspdk_event_accel.so 00:02:20.555 CC module/event/subsystems/bdev/bdev.o 00:02:20.812 LIB libspdk_event_bdev.a 00:02:20.812 SO libspdk_event_bdev.so.5.0 00:02:20.812 SYMLINK libspdk_event_bdev.so 00:02:21.070 CC module/event/subsystems/nbd/nbd.o 00:02:21.070 CC module/event/subsystems/scsi/scsi.o 00:02:21.070 CC module/event/subsystems/ublk/ublk.o 00:02:21.070 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:21.070 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:21.070 LIB libspdk_event_nbd.a 00:02:21.070 SO libspdk_event_nbd.so.5.0 00:02:21.070 LIB libspdk_event_ublk.a 00:02:21.070 LIB libspdk_event_scsi.a 00:02:21.070 SYMLINK libspdk_event_nbd.so 00:02:21.328 SO libspdk_event_ublk.so.2.0 00:02:21.328 SO libspdk_event_scsi.so.5.0 00:02:21.328 SYMLINK libspdk_event_ublk.so 00:02:21.328 LIB libspdk_event_nvmf.a 00:02:21.328 SYMLINK libspdk_event_scsi.so 00:02:21.328 SO libspdk_event_nvmf.so.5.0 00:02:21.328 SYMLINK libspdk_event_nvmf.so 00:02:21.328 CC module/event/subsystems/iscsi/iscsi.o 00:02:21.328 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:21.586 LIB libspdk_event_vhost_scsi.a 00:02:21.586 SO libspdk_event_vhost_scsi.so.2.0 00:02:21.586 LIB libspdk_event_iscsi.a 00:02:21.586 SO libspdk_event_iscsi.so.5.0 00:02:21.586 SYMLINK libspdk_event_vhost_scsi.so 00:02:21.586 SYMLINK libspdk_event_iscsi.so 00:02:21.844 SO libspdk.so.5.0 00:02:21.844 SYMLINK libspdk.so 00:02:21.844 TEST_HEADER include/spdk/accel.h 00:02:21.844 TEST_HEADER include/spdk/accel_module.h 00:02:21.844 TEST_HEADER include/spdk/assert.h 00:02:21.844 TEST_HEADER include/spdk/barrier.h 00:02:21.844 TEST_HEADER include/spdk/base64.h 00:02:21.844 TEST_HEADER include/spdk/bdev.h 00:02:21.844 TEST_HEADER include/spdk/bdev_module.h 00:02:21.844 CC test/rpc_client/rpc_client_test.o 00:02:21.844 TEST_HEADER include/spdk/bit_array.h 00:02:21.844 TEST_HEADER include/spdk/bit_pool.h 00:02:21.844 TEST_HEADER include/spdk/bdev_zone.h 00:02:21.844 TEST_HEADER include/spdk/blob_bdev.h 00:02:21.844 TEST_HEADER include/spdk/blobfs.h 00:02:21.844 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:21.845 TEST_HEADER include/spdk/blob.h 00:02:21.845 TEST_HEADER include/spdk/conf.h 00:02:21.845 TEST_HEADER include/spdk/config.h 00:02:21.845 CC app/spdk_top/spdk_top.o 00:02:21.845 TEST_HEADER include/spdk/cpuset.h 00:02:21.845 TEST_HEADER include/spdk/crc16.h 00:02:21.845 TEST_HEADER include/spdk/crc32.h 00:02:21.845 CC app/trace_record/trace_record.o 00:02:21.845 TEST_HEADER include/spdk/crc64.h 00:02:21.845 TEST_HEADER include/spdk/dif.h 00:02:21.845 TEST_HEADER include/spdk/dma.h 00:02:21.845 TEST_HEADER include/spdk/endian.h 00:02:21.845 CC app/spdk_nvme_discover/discovery_aer.o 00:02:21.845 TEST_HEADER include/spdk/env.h 00:02:21.845 TEST_HEADER include/spdk/env_dpdk.h 00:02:21.845 CC app/spdk_nvme_identify/identify.o 00:02:21.845 TEST_HEADER include/spdk/event.h 00:02:21.845 TEST_HEADER include/spdk/fd_group.h 00:02:21.845 TEST_HEADER include/spdk/fd.h 00:02:21.845 CXX app/trace/trace.o 00:02:21.845 TEST_HEADER include/spdk/file.h 00:02:21.845 TEST_HEADER include/spdk/ftl.h 00:02:21.845 CC 
app/nvmf_tgt/nvmf_main.o 00:02:21.845 CC app/spdk_lspci/spdk_lspci.o 00:02:21.845 CC app/spdk_nvme_perf/perf.o 00:02:21.845 TEST_HEADER include/spdk/gpt_spec.h 00:02:21.845 TEST_HEADER include/spdk/histogram_data.h 00:02:21.845 TEST_HEADER include/spdk/idxd.h 00:02:21.845 TEST_HEADER include/spdk/idxd_spec.h 00:02:21.845 TEST_HEADER include/spdk/hexlify.h 00:02:21.845 TEST_HEADER include/spdk/init.h 00:02:21.845 TEST_HEADER include/spdk/ioat.h 00:02:21.845 TEST_HEADER include/spdk/iscsi_spec.h 00:02:21.845 TEST_HEADER include/spdk/jsonrpc.h 00:02:21.845 TEST_HEADER include/spdk/ioat_spec.h 00:02:21.845 TEST_HEADER include/spdk/json.h 00:02:21.845 TEST_HEADER include/spdk/likely.h 00:02:21.845 TEST_HEADER include/spdk/log.h 00:02:21.845 TEST_HEADER include/spdk/mmio.h 00:02:21.845 TEST_HEADER include/spdk/memory.h 00:02:22.107 TEST_HEADER include/spdk/lvol.h 00:02:22.107 CC app/spdk_dd/spdk_dd.o 00:02:22.107 TEST_HEADER include/spdk/nbd.h 00:02:22.107 TEST_HEADER include/spdk/nvme.h 00:02:22.107 TEST_HEADER include/spdk/notify.h 00:02:22.107 TEST_HEADER include/spdk/nvme_intel.h 00:02:22.107 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:22.107 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:22.107 TEST_HEADER include/spdk/nvme_spec.h 00:02:22.107 TEST_HEADER include/spdk/nvme_zns.h 00:02:22.107 TEST_HEADER include/spdk/nvmf.h 00:02:22.107 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:22.107 TEST_HEADER include/spdk/nvmf_spec.h 00:02:22.107 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:22.107 TEST_HEADER include/spdk/nvmf_transport.h 00:02:22.107 CC app/iscsi_tgt/iscsi_tgt.o 00:02:22.107 TEST_HEADER include/spdk/opal_spec.h 00:02:22.107 TEST_HEADER include/spdk/pci_ids.h 00:02:22.107 TEST_HEADER include/spdk/opal.h 00:02:22.107 TEST_HEADER include/spdk/pipe.h 00:02:22.107 TEST_HEADER include/spdk/reduce.h 00:02:22.107 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:22.107 TEST_HEADER include/spdk/rpc.h 00:02:22.107 TEST_HEADER include/spdk/queue.h 00:02:22.107 TEST_HEADER include/spdk/scsi.h 00:02:22.107 CC app/spdk_tgt/spdk_tgt.o 00:02:22.107 TEST_HEADER include/spdk/scsi_spec.h 00:02:22.107 TEST_HEADER include/spdk/scheduler.h 00:02:22.107 TEST_HEADER include/spdk/sock.h 00:02:22.107 TEST_HEADER include/spdk/thread.h 00:02:22.107 TEST_HEADER include/spdk/string.h 00:02:22.107 TEST_HEADER include/spdk/stdinc.h 00:02:22.107 CC app/vhost/vhost.o 00:02:22.107 TEST_HEADER include/spdk/tree.h 00:02:22.107 TEST_HEADER include/spdk/trace.h 00:02:22.107 TEST_HEADER include/spdk/trace_parser.h 00:02:22.107 TEST_HEADER include/spdk/ublk.h 00:02:22.107 TEST_HEADER include/spdk/util.h 00:02:22.107 TEST_HEADER include/spdk/uuid.h 00:02:22.107 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:22.107 TEST_HEADER include/spdk/version.h 00:02:22.107 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:22.107 TEST_HEADER include/spdk/vhost.h 00:02:22.107 TEST_HEADER include/spdk/vmd.h 00:02:22.107 TEST_HEADER include/spdk/zipf.h 00:02:22.107 TEST_HEADER include/spdk/xor.h 00:02:22.107 CXX test/cpp_headers/accel_module.o 00:02:22.107 CXX test/cpp_headers/accel.o 00:02:22.107 CXX test/cpp_headers/assert.o 00:02:22.107 CXX test/cpp_headers/barrier.o 00:02:22.107 CXX test/cpp_headers/base64.o 00:02:22.107 CXX test/cpp_headers/bdev.o 00:02:22.107 CXX test/cpp_headers/bdev_zone.o 00:02:22.107 CXX test/cpp_headers/bdev_module.o 00:02:22.107 CXX test/cpp_headers/blob_bdev.o 00:02:22.107 CXX test/cpp_headers/bit_pool.o 00:02:22.107 CXX test/cpp_headers/blobfs.o 00:02:22.107 CXX test/cpp_headers/bit_array.o 00:02:22.107 CXX 
test/cpp_headers/blobfs_bdev.o 00:02:22.107 CXX test/cpp_headers/blob.o 00:02:22.107 CC test/env/vtophys/vtophys.o 00:02:22.107 CXX test/cpp_headers/config.o 00:02:22.107 CXX test/cpp_headers/conf.o 00:02:22.107 CXX test/cpp_headers/cpuset.o 00:02:22.107 CXX test/cpp_headers/crc16.o 00:02:22.107 CC test/accel/dif/dif.o 00:02:22.107 CXX test/cpp_headers/crc64.o 00:02:22.107 CXX test/cpp_headers/dif.o 00:02:22.107 CXX test/cpp_headers/crc32.o 00:02:22.107 CXX test/cpp_headers/endian.o 00:02:22.107 CC test/thread/poller_perf/poller_perf.o 00:02:22.107 CXX test/cpp_headers/env.o 00:02:22.107 CXX test/cpp_headers/dma.o 00:02:22.107 CXX test/cpp_headers/env_dpdk.o 00:02:22.107 CXX test/cpp_headers/fd_group.o 00:02:22.107 CC test/app/histogram_perf/histogram_perf.o 00:02:22.107 CXX test/cpp_headers/event.o 00:02:22.107 CXX test/cpp_headers/file.o 00:02:22.107 CXX test/cpp_headers/ftl.o 00:02:22.107 CXX test/cpp_headers/gpt_spec.o 00:02:22.107 CXX test/cpp_headers/fd.o 00:02:22.107 CXX test/cpp_headers/idxd.o 00:02:22.107 CXX test/cpp_headers/hexlify.o 00:02:22.107 CXX test/cpp_headers/histogram_data.o 00:02:22.107 CXX test/cpp_headers/idxd_spec.o 00:02:22.107 CXX test/cpp_headers/ioat.o 00:02:22.107 CXX test/cpp_headers/init.o 00:02:22.107 CC test/env/memory/memory_ut.o 00:02:22.107 CXX test/cpp_headers/ioat_spec.o 00:02:22.107 CC test/event/reactor_perf/reactor_perf.o 00:02:22.107 CXX test/cpp_headers/jsonrpc.o 00:02:22.107 CXX test/cpp_headers/iscsi_spec.o 00:02:22.107 CXX test/cpp_headers/json.o 00:02:22.107 CXX test/cpp_headers/likely.o 00:02:22.107 CXX test/cpp_headers/lvol.o 00:02:22.107 CC test/env/pci/pci_ut.o 00:02:22.107 CXX test/cpp_headers/memory.o 00:02:22.107 CC examples/sock/hello_world/hello_sock.o 00:02:22.107 CXX test/cpp_headers/log.o 00:02:22.107 CC test/blobfs/mkfs/mkfs.o 00:02:22.107 CXX test/cpp_headers/nbd.o 00:02:22.107 CXX test/cpp_headers/mmio.o 00:02:22.107 CC test/nvme/startup/startup.o 00:02:22.107 CXX test/cpp_headers/notify.o 00:02:22.107 CXX test/cpp_headers/nvme.o 00:02:22.107 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:22.107 CC test/app/stub/stub.o 00:02:22.107 CC test/nvme/boot_partition/boot_partition.o 00:02:22.107 CC test/event/event_perf/event_perf.o 00:02:22.107 CC test/nvme/reset/reset.o 00:02:22.107 CXX test/cpp_headers/nvme_intel.o 00:02:22.107 CC test/nvme/reserve/reserve.o 00:02:22.107 CXX test/cpp_headers/nvme_ocssd.o 00:02:22.107 CC test/app/bdev_svc/bdev_svc.o 00:02:22.107 CC test/event/reactor/reactor.o 00:02:22.107 CC test/app/jsoncat/jsoncat.o 00:02:22.107 CC test/nvme/e2edp/nvme_dp.o 00:02:22.107 CC test/nvme/compliance/nvme_compliance.o 00:02:22.107 CC test/nvme/overhead/overhead.o 00:02:22.107 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:22.107 CC examples/nvme/arbitration/arbitration.o 00:02:22.107 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:22.107 CC app/fio/nvme/fio_plugin.o 00:02:22.107 CC examples/nvme/hello_world/hello_world.o 00:02:22.107 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:22.107 CC test/nvme/fused_ordering/fused_ordering.o 00:02:22.107 CC examples/accel/perf/accel_perf.o 00:02:22.107 CC test/nvme/err_injection/err_injection.o 00:02:22.107 CC test/nvme/connect_stress/connect_stress.o 00:02:22.107 CC test/event/app_repeat/app_repeat.o 00:02:22.107 CC examples/ioat/perf/perf.o 00:02:22.107 CC test/nvme/simple_copy/simple_copy.o 00:02:22.107 CC examples/ioat/verify/verify.o 00:02:22.107 CC test/nvme/fdp/fdp.o 00:02:22.107 CC examples/util/zipf/zipf.o 00:02:22.107 CC 
examples/nvme/reconnect/reconnect.o 00:02:22.107 CC test/bdev/bdevio/bdevio.o 00:02:22.107 CC examples/vmd/led/led.o 00:02:22.107 CC examples/nvme/hotplug/hotplug.o 00:02:22.107 CC test/nvme/aer/aer.o 00:02:22.107 CC test/nvme/sgl/sgl.o 00:02:22.107 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:22.107 CC examples/vmd/lsvmd/lsvmd.o 00:02:22.107 CC examples/nvme/abort/abort.o 00:02:22.107 CC examples/bdev/hello_world/hello_bdev.o 00:02:22.107 CC test/event/scheduler/scheduler.o 00:02:22.377 CC test/nvme/cuse/cuse.o 00:02:22.377 CC examples/blob/hello_world/hello_blob.o 00:02:22.377 CC examples/idxd/perf/perf.o 00:02:22.377 CC app/fio/bdev/fio_plugin.o 00:02:22.377 CC test/dma/test_dma/test_dma.o 00:02:22.377 CC examples/bdev/bdevperf/bdevperf.o 00:02:22.377 CXX test/cpp_headers/nvme_spec.o 00:02:22.377 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:22.377 CC examples/blob/cli/blobcli.o 00:02:22.377 CC examples/thread/thread/thread_ex.o 00:02:22.377 CC examples/nvmf/nvmf/nvmf.o 00:02:22.377 CC test/env/mem_callbacks/mem_callbacks.o 00:02:22.377 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:22.377 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:22.637 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:22.637 CC test/lvol/esnap/esnap.o 00:02:22.637 LINK histogram_perf 00:02:22.637 LINK spdk_tgt 00:02:22.637 LINK nvmf_tgt 00:02:22.637 LINK poller_perf 00:02:22.637 LINK iscsi_tgt 00:02:22.637 LINK rpc_client_test 00:02:22.637 LINK reactor 00:02:22.637 LINK vtophys 00:02:22.637 LINK spdk_lspci 00:02:22.637 LINK event_perf 00:02:22.637 LINK reactor_perf 00:02:22.637 LINK zipf 00:02:22.637 LINK jsoncat 00:02:22.900 LINK spdk_trace_record 00:02:22.900 LINK spdk_nvme_discover 00:02:22.900 LINK mkfs 00:02:22.900 LINK interrupt_tgt 00:02:22.900 LINK app_repeat 00:02:22.900 LINK env_dpdk_post_init 00:02:22.900 LINK stub 00:02:22.900 LINK bdev_svc 00:02:22.900 LINK boot_partition 00:02:22.900 LINK reserve 00:02:22.900 LINK reset 00:02:22.900 LINK vhost 00:02:22.900 CXX test/cpp_headers/nvme_zns.o 00:02:22.900 CXX test/cpp_headers/nvmf_cmd.o 00:02:22.900 LINK startup 00:02:22.900 LINK verify 00:02:22.900 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:22.900 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:22.900 LINK hello_blob 00:02:22.900 LINK scheduler 00:02:22.900 CXX test/cpp_headers/nvmf.o 00:02:22.900 LINK ioat_perf 00:02:22.900 CXX test/cpp_headers/nvmf_spec.o 00:02:22.900 LINK hello_sock 00:02:22.900 CXX test/cpp_headers/nvmf_transport.o 00:02:22.900 LINK err_injection 00:02:22.900 CXX test/cpp_headers/opal.o 00:02:22.900 CXX test/cpp_headers/opal_spec.o 00:02:22.900 CXX test/cpp_headers/pci_ids.o 00:02:22.900 CXX test/cpp_headers/pipe.o 00:02:22.900 CXX test/cpp_headers/queue.o 00:02:22.900 CXX test/cpp_headers/reduce.o 00:02:22.900 LINK overhead 00:02:22.900 CXX test/cpp_headers/rpc.o 00:02:22.900 CXX test/cpp_headers/scheduler.o 00:02:22.900 LINK connect_stress 00:02:22.900 CXX test/cpp_headers/scsi_spec.o 00:02:22.900 CXX test/cpp_headers/scsi.o 00:02:22.900 CXX test/cpp_headers/stdinc.o 00:02:22.900 CXX test/cpp_headers/sock.o 00:02:22.900 CXX test/cpp_headers/thread.o 00:02:22.900 CXX test/cpp_headers/string.o 00:02:22.900 CXX test/cpp_headers/trace.o 00:02:22.900 LINK doorbell_aers 00:02:22.900 CXX test/cpp_headers/trace_parser.o 00:02:22.900 CXX test/cpp_headers/ublk.o 00:02:23.157 CXX test/cpp_headers/version.o 00:02:23.157 CXX test/cpp_headers/vfio_user_pci.o 00:02:23.157 CXX test/cpp_headers/tree.o 00:02:23.157 CXX test/cpp_headers/uuid.o 00:02:23.158 CXX test/cpp_headers/util.o 00:02:23.158 CXX 
test/cpp_headers/vfio_user_spec.o 00:02:23.158 CXX test/cpp_headers/vhost.o 00:02:23.158 LINK lsvmd 00:02:23.158 LINK fused_ordering 00:02:23.158 LINK sgl 00:02:23.158 LINK led 00:02:23.158 CXX test/cpp_headers/vmd.o 00:02:23.158 LINK hotplug 00:02:23.158 CXX test/cpp_headers/xor.o 00:02:23.158 CXX test/cpp_headers/zipf.o 00:02:23.158 LINK spdk_dd 00:02:23.158 LINK cmb_copy 00:02:23.158 LINK pmr_persistence 00:02:23.158 LINK hello_bdev 00:02:23.158 LINK hello_world 00:02:23.158 LINK nvme_compliance 00:02:23.158 LINK simple_copy 00:02:23.158 LINK bdevio 00:02:23.158 LINK reconnect 00:02:23.158 LINK thread 00:02:23.158 LINK nvme_dp 00:02:23.416 LINK arbitration 00:02:23.416 LINK nvme_fuzz 00:02:23.416 LINK dif 00:02:23.416 LINK test_dma 00:02:23.416 LINK aer 00:02:23.416 LINK idxd_perf 00:02:23.416 LINK fdp 00:02:23.416 LINK spdk_nvme 00:02:23.416 LINK nvmf 00:02:23.416 LINK abort 00:02:23.416 LINK spdk_bdev 00:02:23.416 LINK pci_ut 00:02:23.416 LINK accel_perf 00:02:23.674 LINK blobcli 00:02:23.674 LINK spdk_nvme_perf 00:02:23.674 LINK nvme_manage 00:02:23.674 LINK mem_callbacks 00:02:23.674 LINK spdk_trace 00:02:23.674 LINK vhost_fuzz 00:02:23.674 LINK spdk_top 00:02:23.674 LINK memory_ut 00:02:23.933 LINK spdk_nvme_identify 00:02:23.933 LINK cuse 00:02:23.933 LINK bdevperf 00:02:24.498 LINK iscsi_fuzz 00:02:26.398 LINK esnap 00:02:26.657 00:02:26.657 real 0m34.431s 00:02:26.657 user 5m38.515s 00:02:26.657 sys 4m28.549s 00:02:26.657 19:57:24 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:26.657 19:57:24 -- common/autotest_common.sh@10 -- $ set +x 00:02:26.657 ************************************ 00:02:26.657 END TEST make 00:02:26.657 ************************************ 00:02:26.657 19:57:24 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:02:26.657 19:57:24 -- nvmf/common.sh@7 -- # uname -s 00:02:26.657 19:57:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:26.657 19:57:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:26.657 19:57:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:26.657 19:57:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:26.657 19:57:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:26.657 19:57:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:26.657 19:57:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:26.657 19:57:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:26.657 19:57:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:26.657 19:57:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:26.657 19:57:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:02:26.657 19:57:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:02:26.657 19:57:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:26.657 19:57:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:26.657 19:57:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:26.657 19:57:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:02:26.657 19:57:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:26.657 19:57:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:26.657 19:57:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:26.657 19:57:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.657 19:57:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.657 19:57:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.657 19:57:24 -- paths/export.sh@5 -- # export PATH 00:02:26.657 19:57:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.657 19:57:24 -- nvmf/common.sh@46 -- # : 0 00:02:26.657 19:57:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:26.657 19:57:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:26.657 19:57:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:26.657 19:57:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:26.657 19:57:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:26.657 19:57:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:26.657 19:57:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:26.657 19:57:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:26.657 19:57:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:26.657 19:57:24 -- spdk/autotest.sh@32 -- # uname -s 00:02:26.657 19:57:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:26.657 19:57:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:26.657 19:57:24 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:02:26.657 19:57:24 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:26.657 19:57:24 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:02:26.657 19:57:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:26.657 19:57:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:26.657 19:57:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:26.657 19:57:24 -- spdk/autotest.sh@48 -- # udevadm_pid=1261739 00:02:26.657 19:57:24 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:02:26.657 19:57:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:26.657 19:57:24 -- spdk/autotest.sh@54 -- # echo 1261741 00:02:26.657 19:57:24 -- spdk/autotest.sh@56 -- # echo 1261742 00:02:26.657 19:57:24 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:02:26.657 19:57:24 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:02:26.657 19:57:24 -- spdk/autotest.sh@60 -- # echo 1261743 00:02:26.657 19:57:24 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:02:26.657 19:57:24 -- spdk/autotest.sh@62 -- # echo 1261744 00:02:26.657 19:57:24 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:26.657 19:57:24 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:26.657 19:57:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:26.657 19:57:24 -- common/autotest_common.sh@10 -- # set +x 00:02:26.657 19:57:24 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l 00:02:26.657 19:57:24 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l 00:02:26.657 19:57:24 -- spdk/autotest.sh@70 -- # create_test_list 00:02:26.657 19:57:24 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:26.657 19:57:24 -- common/autotest_common.sh@10 -- # set +x 00:02:26.657 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:26.657 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:26.657 19:57:24 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/autotest.sh 00:02:26.657 19:57:24 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk 00:02:26.657 19:57:24 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:02:26.657 19:57:24 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:02:26.657 19:57:24 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:02:26.657 19:57:24 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:26.657 19:57:24 -- common/autotest_common.sh@1440 -- # uname 00:02:26.657 19:57:24 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:26.657 19:57:24 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:26.657 19:57:24 -- common/autotest_common.sh@1460 -- # uname 00:02:26.657 19:57:24 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:26.657 19:57:24 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:26.657 19:57:24 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:26.657 19:57:24 -- spdk/autotest.sh@83 -- # hash lcov 00:02:26.657 19:57:24 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:26.657 19:57:24 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:26.657 --rc lcov_branch_coverage=1 00:02:26.657 --rc lcov_function_coverage=1 00:02:26.657 --rc genhtml_branch_coverage=1 00:02:26.657 --rc genhtml_function_coverage=1 00:02:26.657 --rc genhtml_legend=1 00:02:26.657 --rc geninfo_all_blocks=1 00:02:26.657 ' 00:02:26.657 19:57:24 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:26.657 --rc lcov_branch_coverage=1 00:02:26.657 --rc lcov_function_coverage=1 00:02:26.657 --rc genhtml_branch_coverage=1 00:02:26.657 --rc genhtml_function_coverage=1 00:02:26.657 --rc genhtml_legend=1 00:02:26.657 --rc 
geninfo_all_blocks=1 00:02:26.657 ' 00:02:26.657 19:57:24 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:26.657 --rc lcov_branch_coverage=1 00:02:26.657 --rc lcov_function_coverage=1 00:02:26.657 --rc genhtml_branch_coverage=1 00:02:26.657 --rc genhtml_function_coverage=1 00:02:26.657 --rc genhtml_legend=1 00:02:26.657 --rc geninfo_all_blocks=1 00:02:26.657 --no-external' 00:02:26.657 19:57:24 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:26.657 --rc lcov_branch_coverage=1 00:02:26.657 --rc lcov_function_coverage=1 00:02:26.657 --rc genhtml_branch_coverage=1 00:02:26.657 --rc genhtml_function_coverage=1 00:02:26.657 --rc genhtml_legend=1 00:02:26.657 --rc geninfo_all_blocks=1 00:02:26.657 --no-external' 00:02:26.657 19:57:24 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:26.657 lcov: LCOV version 1.14 00:02:26.657 19:57:24 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/dsa-phy-autotest/spdk -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info 00:02:30.839 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:30.839 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:30.839 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:30.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:40.810 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:40.810 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:40.810 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:40.810 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:40.810 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:40.810 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:40.810 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:40.810 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:40.810 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:40.810 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:40.810 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:40.810 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:40.810 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:40.810 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:40.810 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:40.810 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:40.810 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:40.810 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:40.810 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:40.810 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:40.810 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:40.810 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:40.810 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:40.810 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:40.810 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc32.gcno 
00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:40.811 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:40.811 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:40.811 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:40.811 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:40.812 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:40.812 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:40.812 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:42.188 19:57:39 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:42.188 19:57:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:42.188 19:57:39 -- common/autotest_common.sh@10 -- # set +x 00:02:42.188 19:57:39 -- spdk/autotest.sh@102 -- # rm -f 00:02:42.188 19:57:39 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:44.731 0000:c9:00.0 (144d a80a): Already using the nvme driver 00:02:44.731 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:02:44.731 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:02:44.731 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:02:44.731 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:02:44.731 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:02:44.731 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:02:44.731 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:02:44.731 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:02:44.731 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:02:44.731 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:02:44.731 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:02:44.731 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:02:44.731 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:02:44.731 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:02:44.731 0000:e7:02.0 (8086 0cfe): Already 
using the idxd driver 00:02:44.731 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:02:44.731 0000:03:00.0 (1344 51c3): Already using the nvme driver 00:02:44.731 19:57:42 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:44.731 19:57:42 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:44.731 19:57:42 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:44.731 19:57:42 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:44.731 19:57:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:44.731 19:57:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:44.731 19:57:42 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:44.731 19:57:42 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:44.731 19:57:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:44.731 19:57:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:44.732 19:57:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:02:44.732 19:57:42 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:02:44.732 19:57:42 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:44.732 19:57:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:44.732 19:57:42 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:02:44.732 19:57:42 -- spdk/autotest.sh@121 -- # grep -v p 00:02:44.732 19:57:42 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 00:02:44.732 19:57:42 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:44.732 19:57:42 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:44.732 19:57:42 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:02:44.732 19:57:42 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:44.732 19:57:42 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:44.732 No valid GPT data, bailing 00:02:44.732 19:57:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:44.732 19:57:42 -- scripts/common.sh@393 -- # pt= 00:02:44.732 19:57:42 -- scripts/common.sh@394 -- # return 1 00:02:44.732 19:57:42 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:44.732 1+0 records in 00:02:44.732 1+0 records out 00:02:44.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00322529 s, 325 MB/s 00:02:44.732 19:57:42 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:44.732 19:57:42 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:44.732 19:57:42 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:02:44.732 19:57:42 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:02:44.732 19:57:42 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:44.732 No valid GPT data, bailing 00:02:44.732 19:57:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:44.732 19:57:42 -- scripts/common.sh@393 -- # pt= 00:02:44.732 19:57:42 -- scripts/common.sh@394 -- # return 1 00:02:44.732 19:57:42 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:02:44.732 1+0 records in 00:02:44.732 1+0 records out 00:02:44.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00403593 s, 260 MB/s 00:02:44.732 19:57:42 -- spdk/autotest.sh@129 -- # sync 00:02:44.732 19:57:42 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 
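
The pre_cleanup pass traced above does three things per NVMe namespace: it skips anything reported as zoned in /sys/block/*/queue/zoned, it skips anything that already carries partition data (spdk-gpt.py plus the blkid PTTYPE probe both come back empty here, hence "No valid GPT data, bailing"), and it zeroes the first MiB of whatever is left with dd. A condensed sketch of the same flow, using only the probes visible in the log (the loop structure is simplified, not the autotest code itself):

  # Wipe idle, non-zoned NVMe namespaces before the tests start.
  for dev in $(ls /dev/nvme*n* | grep -v p || true); do
      name=$(basename "$dev")
      zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null || echo none)
      [[ $zoned != none ]] && continue                   # leave zoned namespaces alone
      pt=$(blkid -s PTTYPE -o value "$dev")              # same probe as in the log
      [[ -n $pt ]] && continue                           # has a partition table: in use
      dd if=/dev/zero of="$dev" bs=1M count=1            # zero the first MiB, as above
  done
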
00:02:44.732 19:57:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:44.732 19:57:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:50.081 19:57:47 -- spdk/autotest.sh@135 -- # uname -s 00:02:50.081 19:57:47 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:02:50.081 19:57:47 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh 00:02:50.081 19:57:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:50.081 19:57:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:50.081 19:57:47 -- common/autotest_common.sh@10 -- # set +x 00:02:50.081 ************************************ 00:02:50.081 START TEST setup.sh 00:02:50.081 ************************************ 00:02:50.081 19:57:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh 00:02:50.081 * Looking for test storage... 00:02:50.081 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:50.081 19:57:47 -- setup/test-setup.sh@10 -- # uname -s 00:02:50.081 19:57:47 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:50.081 19:57:47 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh 00:02:50.081 19:57:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:50.081 19:57:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:50.081 19:57:47 -- common/autotest_common.sh@10 -- # set +x 00:02:50.081 ************************************ 00:02:50.081 START TEST acl 00:02:50.081 ************************************ 00:02:50.081 19:57:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh 00:02:50.081 * Looking for test storage... 
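
run_test, which drives setup.sh and acl.sh here, is the autotest harness wrapper: it takes a test name plus the command to execute, prints the starred START TEST / END TEST banners that follow in the log, runs the command, and reports its duration (the real/user/sys lines further down). A stripped-down stand-in with the same shape is sketched below; the real helper in autotest_common.sh also handles xtrace and failure bookkeeping, which is omitted:

  # Minimal run_test-style wrapper (illustrative, not the SPDK implementation).
  run_test() {
      local name=$1 rc
      shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                                  # real/user/sys go to stderr
      rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }

  run_test acl /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh
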
00:02:50.081 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:50.081 19:57:47 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:50.081 19:57:47 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:50.081 19:57:47 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:50.081 19:57:47 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:50.081 19:57:47 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:50.081 19:57:47 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:50.081 19:57:47 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:50.081 19:57:47 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:50.081 19:57:47 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:50.081 19:57:47 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:50.081 19:57:47 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:02:50.081 19:57:47 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:02:50.081 19:57:47 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:50.081 19:57:47 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:50.081 19:57:47 -- setup/acl.sh@12 -- # devs=() 00:02:50.081 19:57:47 -- setup/acl.sh@12 -- # declare -a devs 00:02:50.081 19:57:47 -- setup/acl.sh@13 -- # drivers=() 00:02:50.081 19:57:47 -- setup/acl.sh@13 -- # declare -A drivers 00:02:50.081 19:57:47 -- setup/acl.sh@51 -- # setup reset 00:02:50.081 19:57:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:50.081 19:57:47 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.381 19:57:51 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:53.381 19:57:51 -- setup/acl.sh@16 -- # local dev driver 00:02:53.381 19:57:51 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.381 19:57:51 -- setup/acl.sh@15 -- # setup output status 00:02:53.381 19:57:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.382 19:57:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:02:55.926 Hugepages 00:02:55.926 node hugesize free / total 00:02:55.926 19:57:53 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:55.926 19:57:53 -- setup/acl.sh@19 -- # continue 00:02:55.926 19:57:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.926 19:57:53 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:55.926 19:57:53 -- setup/acl.sh@19 -- # continue 00:02:55.926 19:57:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.926 19:57:53 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:55.926 19:57:53 -- setup/acl.sh@19 -- # continue 00:02:55.926 19:57:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.926 00:02:55.926 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:55.926 19:57:53 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:55.926 19:57:53 -- setup/acl.sh@19 -- # continue 00:02:55.926 19:57:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.186 19:57:53 -- setup/acl.sh@19 -- # [[ 0000:03:00.0 == *:*:*.* ]] 00:02:56.186 19:57:53 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:56.186 19:57:53 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\3\:\0\0\.\0* ]] 00:02:56.186 19:57:53 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:56.186 19:57:53 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:56.186 19:57:53 -- setup/acl.sh@18 -- # 
read -r _ dev _ _ _ driver _ 00:02:56.186 19:57:53 -- setup/acl.sh@19 -- # [[ 0000:6a:01.0 == *:*:*.* ]] 00:02:56.186 19:57:53 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:56.186 19:57:53 -- setup/acl.sh@20 -- # continue 00:02:56.186 19:57:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.186 19:57:53 -- setup/acl.sh@19 -- # [[ 0000:6a:02.0 == *:*:*.* ]] 00:02:56.186 19:57:53 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:56.186 19:57:53 -- setup/acl.sh@20 -- # continue 00:02:56.186 19:57:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.186 19:57:53 -- setup/acl.sh@19 -- # [[ 0000:6f:01.0 == *:*:*.* ]] 00:02:56.187 19:57:53 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:56.187 19:57:53 -- setup/acl.sh@20 -- # continue 00:02:56.187 19:57:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.187 19:57:53 -- setup/acl.sh@19 -- # [[ 0000:6f:02.0 == *:*:*.* ]] 00:02:56.187 19:57:53 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:56.187 19:57:53 -- setup/acl.sh@20 -- # continue 00:02:56.187 19:57:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.187 19:57:53 -- setup/acl.sh@19 -- # [[ 0000:74:01.0 == *:*:*.* ]] 00:02:56.187 19:57:53 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:56.187 19:57:53 -- setup/acl.sh@20 -- # continue 00:02:56.187 19:57:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.187 19:57:53 -- setup/acl.sh@19 -- # [[ 0000:74:02.0 == *:*:*.* ]] 00:02:56.187 19:57:53 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:56.187 19:57:53 -- setup/acl.sh@20 -- # continue 00:02:56.187 19:57:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.187 19:57:53 -- setup/acl.sh@19 -- # [[ 0000:79:01.0 == *:*:*.* ]] 00:02:56.187 19:57:53 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:56.187 19:57:53 -- setup/acl.sh@20 -- # continue 00:02:56.187 19:57:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.187 19:57:53 -- setup/acl.sh@19 -- # [[ 0000:79:02.0 == *:*:*.* ]] 00:02:56.187 19:57:53 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:56.187 19:57:53 -- setup/acl.sh@20 -- # continue 00:02:56.187 19:57:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.187 19:57:54 -- setup/acl.sh@19 -- # [[ 0000:c9:00.0 == *:*:*.* ]] 00:02:56.187 19:57:54 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:56.187 19:57:54 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]] 00:02:56.187 19:57:54 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:56.187 19:57:54 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:56.187 19:57:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.187 19:57:54 -- setup/acl.sh@19 -- # [[ 0000:e7:01.0 == *:*:*.* ]] 00:02:56.187 19:57:54 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:56.187 19:57:54 -- setup/acl.sh@20 -- # continue 00:02:56.187 19:57:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.187 19:57:54 -- setup/acl.sh@19 -- # [[ 0000:e7:02.0 == *:*:*.* ]] 00:02:56.187 19:57:54 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:56.187 19:57:54 -- setup/acl.sh@20 -- # continue 00:02:56.187 19:57:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.187 19:57:54 -- setup/acl.sh@19 -- # [[ 0000:ec:01.0 == *:*:*.* ]] 00:02:56.187 19:57:54 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:56.187 19:57:54 -- setup/acl.sh@20 -- # continue 00:02:56.187 19:57:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.187 19:57:54 -- setup/acl.sh@19 -- # [[ 0000:ec:02.0 == *:*:*.* ]] 00:02:56.187 19:57:54 -- 
setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:56.187 19:57:54 -- setup/acl.sh@20 -- # continue 00:02:56.187 19:57:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.187 19:57:54 -- setup/acl.sh@19 -- # [[ 0000:f1:01.0 == *:*:*.* ]] 00:02:56.187 19:57:54 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:56.187 19:57:54 -- setup/acl.sh@20 -- # continue 00:02:56.187 19:57:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.187 19:57:54 -- setup/acl.sh@19 -- # [[ 0000:f1:02.0 == *:*:*.* ]] 00:02:56.187 19:57:54 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:56.187 19:57:54 -- setup/acl.sh@20 -- # continue 00:02:56.187 19:57:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.187 19:57:54 -- setup/acl.sh@19 -- # [[ 0000:f6:01.0 == *:*:*.* ]] 00:02:56.187 19:57:54 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:56.187 19:57:54 -- setup/acl.sh@20 -- # continue 00:02:56.187 19:57:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.187 19:57:54 -- setup/acl.sh@19 -- # [[ 0000:f6:02.0 == *:*:*.* ]] 00:02:56.187 19:57:54 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:56.187 19:57:54 -- setup/acl.sh@20 -- # continue 00:02:56.187 19:57:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.187 19:57:54 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:02:56.187 19:57:54 -- setup/acl.sh@54 -- # run_test denied denied 00:02:56.187 19:57:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:56.187 19:57:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:56.187 19:57:54 -- common/autotest_common.sh@10 -- # set +x 00:02:56.187 ************************************ 00:02:56.187 START TEST denied 00:02:56.187 ************************************ 00:02:56.187 19:57:54 -- common/autotest_common.sh@1104 -- # denied 00:02:56.187 19:57:54 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:03:00.0' 00:02:56.187 19:57:54 -- setup/acl.sh@38 -- # setup output config 00:02:56.187 19:57:54 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:03:00.0' 00:02:56.187 19:57:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.187 19:57:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:00.392 0000:03:00.0 (1344 51c3): Skipping denied controller at 0000:03:00.0 00:03:00.392 19:57:57 -- setup/acl.sh@40 -- # verify 0000:03:00.0 00:03:00.392 19:57:57 -- setup/acl.sh@28 -- # local dev driver 00:03:00.392 19:57:57 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:00.392 19:57:57 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:03:00.0 ]] 00:03:00.392 19:57:57 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:03:00.0/driver 00:03:00.392 19:57:57 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:00.392 19:57:57 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:00.392 19:57:57 -- setup/acl.sh@41 -- # setup reset 00:03:00.392 19:57:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:00.392 19:57:57 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.599 00:03:04.599 real 0m7.756s 00:03:04.599 user 0m1.837s 00:03:04.599 sys 0m3.770s 00:03:04.599 19:58:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.599 19:58:01 -- common/autotest_common.sh@10 -- # set +x 00:03:04.599 ************************************ 00:03:04.599 END TEST denied 00:03:04.599 ************************************ 00:03:04.599 19:58:01 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:04.599 19:58:01 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:04.599 19:58:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:04.599 19:58:01 -- common/autotest_common.sh@10 -- # set +x 00:03:04.600 ************************************ 00:03:04.600 START TEST allowed 00:03:04.600 ************************************ 00:03:04.600 19:58:01 -- common/autotest_common.sh@1104 -- # allowed 00:03:04.600 19:58:01 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:03:00.0 00:03:04.600 19:58:01 -- setup/acl.sh@45 -- # setup output config 00:03:04.600 19:58:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.600 19:58:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:04.600 19:58:01 -- setup/acl.sh@46 -- # grep -E '0000:03:00.0 .*: nvme -> .*' 00:03:07.899 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:03:07.899 19:58:05 -- setup/acl.sh@47 -- # verify 0000:c9:00.0 00:03:07.899 19:58:05 -- setup/acl.sh@28 -- # local dev driver 00:03:07.899 19:58:05 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:07.899 19:58:05 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:c9:00.0 ]] 00:03:07.899 19:58:05 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:c9:00.0/driver 00:03:07.899 19:58:05 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:07.899 19:58:05 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:07.899 19:58:05 -- setup/acl.sh@48 -- # setup reset 00:03:07.899 19:58:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:07.899 19:58:05 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.199 00:03:11.199 real 0m7.052s 00:03:11.199 user 0m1.975s 00:03:11.199 sys 0m3.946s 00:03:11.199 19:58:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:11.199 19:58:08 -- common/autotest_common.sh@10 -- # set +x 00:03:11.199 ************************************ 00:03:11.199 END TEST allowed 00:03:11.199 ************************************ 00:03:11.199 00:03:11.199 real 0m21.039s 00:03:11.199 user 0m5.928s 00:03:11.199 sys 0m11.668s 00:03:11.199 19:58:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:11.200 19:58:08 -- common/autotest_common.sh@10 -- # set +x 00:03:11.200 ************************************ 00:03:11.200 END TEST acl 00:03:11.200 ************************************ 00:03:11.200 19:58:08 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh 00:03:11.200 19:58:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:11.200 19:58:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:11.200 19:58:08 -- common/autotest_common.sh@10 -- # set +x 00:03:11.200 ************************************ 00:03:11.200 START TEST hugepages 00:03:11.200 ************************************ 00:03:11.200 19:58:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh 00:03:11.200 * Looking for test storage... 
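
The denied and allowed tests that just finished steer scripts/setup.sh purely through environment variables: PCI_BLOCKED=' 0000:03:00.0' makes setup.sh print "Skipping denied controller" and leave that device on its kernel nvme driver, while PCI_ALLOWED='0000:03:00.0' makes a later pass rebind only that controller to vfio-pci and leave 0000:c9:00.0 untouched. Both outcomes are checked by resolving the device's driver symlink in sysfs. A sketch of that verification (BDFs copied from the log, helper name illustrative, setup.sh path shortened):

  # Resolve which driver a PCI function is currently bound to.
  driver_of() {
      basename "$(readlink -f "/sys/bus/pci/devices/$1/driver")"
  }

  PCI_BLOCKED=' 0000:03:00.0' scripts/setup.sh config      # denied case
  [[ $(driver_of 0000:03:00.0) == nvme ]] || echo "FAIL: blocked controller was rebound"

  PCI_ALLOWED='0000:03:00.0' scripts/setup.sh config       # allowed case
  [[ $(driver_of 0000:03:00.0) == vfio-pci ]] || echo "FAIL: allowed controller not rebound"
  [[ $(driver_of 0000:c9:00.0) == nvme ]] || echo "FAIL: unlisted controller was touched"
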
00:03:11.200 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:03:11.200 19:58:09 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:11.200 19:58:09 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:11.200 19:58:09 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:11.200 19:58:09 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:11.200 19:58:09 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:11.200 19:58:09 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:11.200 19:58:09 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:11.200 19:58:09 -- setup/common.sh@18 -- # local node= 00:03:11.200 19:58:09 -- setup/common.sh@19 -- # local var val 00:03:11.200 19:58:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.200 19:58:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.200 19:58:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.200 19:58:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.200 19:58:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.200 19:58:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 107493888 kB' 'MemAvailable: 111229528 kB' 'Buffers: 2696 kB' 'Cached: 10715416 kB' 'SwapCached: 0 kB' 'Active: 7837936 kB' 'Inactive: 3438960 kB' 'Active(anon): 6859764 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568136 kB' 'Mapped: 178772 kB' 'Shmem: 6300980 kB' 'KReclaimable: 286280 kB' 'Slab: 931000 kB' 'SReclaimable: 286280 kB' 'SUnreclaim: 644720 kB' 'KernelStack: 24816 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69510428 kB' 'Committed_AS: 8405800 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228496 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- 
# [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.200 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.200 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.201 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.201 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.201 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.201 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.201 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.201 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.201 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.201 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.201 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.201 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.201 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.201 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.201 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.201 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.201 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.201 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # continue 00:03:11.201 19:58:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.201 19:58:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.201 19:58:09 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.201 19:58:09 -- setup/common.sh@33 -- # echo 2048 00:03:11.201 19:58:09 -- setup/common.sh@33 -- # return 0 00:03:11.201 19:58:09 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:11.201 19:58:09 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:11.201 19:58:09 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:11.201 19:58:09 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:11.201 19:58:09 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:11.201 19:58:09 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
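
The get_meminfo trace above shows how hugepages.sh learns the default huge page size: it reads /proc/meminfo (or a node-specific meminfo when a node is given) line by line with IFS=': ', skips every field until the requested key matches, then echoes the value, which is how default_hugepages ends up as 2048. A condensed version of the same loop, without the per-node handling (simplified from the trace, not the verbatim helper):

  # Condensed get_meminfo: print the value of a single /proc/meminfo field.
  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do             # the "kB" unit lands in $_
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }

  default_hugepages=$(get_meminfo Hugepagesize)        # 2048 on this node, per the log
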
00:03:11.201 19:58:09 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:11.201 19:58:09 -- setup/hugepages.sh@207 -- # get_nodes 00:03:11.201 19:58:09 -- setup/hugepages.sh@27 -- # local node 00:03:11.201 19:58:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.201 19:58:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:11.201 19:58:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.201 19:58:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:11.201 19:58:09 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:11.201 19:58:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:11.201 19:58:09 -- setup/hugepages.sh@208 -- # clear_hp 00:03:11.201 19:58:09 -- setup/hugepages.sh@37 -- # local node hp 00:03:11.201 19:58:09 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:11.201 19:58:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.201 19:58:09 -- setup/hugepages.sh@41 -- # echo 0 00:03:11.201 19:58:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.201 19:58:09 -- setup/hugepages.sh@41 -- # echo 0 00:03:11.201 19:58:09 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:11.201 19:58:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.201 19:58:09 -- setup/hugepages.sh@41 -- # echo 0 00:03:11.201 19:58:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.201 19:58:09 -- setup/hugepages.sh@41 -- # echo 0 00:03:11.201 19:58:09 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:11.201 19:58:09 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:11.201 19:58:09 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:11.201 19:58:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:11.201 19:58:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:11.201 19:58:09 -- common/autotest_common.sh@10 -- # set +x 00:03:11.201 ************************************ 00:03:11.201 START TEST default_setup 00:03:11.201 ************************************ 00:03:11.201 19:58:09 -- common/autotest_common.sh@1104 -- # default_setup 00:03:11.201 19:58:09 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:11.201 19:58:09 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:11.201 19:58:09 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:11.201 19:58:09 -- setup/hugepages.sh@51 -- # shift 00:03:11.201 19:58:09 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:11.201 19:58:09 -- setup/hugepages.sh@52 -- # local node_ids 00:03:11.201 19:58:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:11.201 19:58:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:11.201 19:58:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:11.201 19:58:09 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:11.201 19:58:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:11.201 19:58:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:11.201 19:58:09 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:11.201 19:58:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:11.201 19:58:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:11.201 19:58:09 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:11.201 19:58:09 -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:11.201 19:58:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:11.201 19:58:09 -- setup/hugepages.sh@73 -- # return 0 00:03:11.201 19:58:09 -- setup/hugepages.sh@137 -- # setup output 00:03:11.201 19:58:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.201 19:58:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:14.499 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:14.500 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:14.500 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:14.500 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:03:14.500 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:14.500 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:03:14.500 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:14.500 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:03:14.500 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:14.500 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:03:14.500 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:03:14.500 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:03:14.500 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:14.500 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:03:14.500 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:14.500 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:03:15.069 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:03:15.069 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:03:15.341 19:58:13 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:15.341 19:58:13 -- setup/hugepages.sh@89 -- # local node 00:03:15.341 19:58:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:15.341 19:58:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:15.341 19:58:13 -- setup/hugepages.sh@92 -- # local surp 00:03:15.341 19:58:13 -- setup/hugepages.sh@93 -- # local resv 00:03:15.341 19:58:13 -- setup/hugepages.sh@94 -- # local anon 00:03:15.341 19:58:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:15.341 19:58:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:15.341 19:58:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:15.341 19:58:13 -- setup/common.sh@18 -- # local node= 00:03:15.341 19:58:13 -- setup/common.sh@19 -- # local var val 00:03:15.341 19:58:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.341 19:58:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.341 19:58:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.341 19:58:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.341 19:58:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.341 19:58:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.341 19:58:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109767548 kB' 'MemAvailable: 113502836 kB' 'Buffers: 2696 kB' 'Cached: 10715656 kB' 'SwapCached: 0 kB' 'Active: 7864536 kB' 'Inactive: 3438960 kB' 'Active(anon): 6886364 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594740 kB' 'Mapped: 179280 kB' 'Shmem: 6301220 kB' 'KReclaimable: 285576 kB' 'Slab: 923544 kB' 'SReclaimable: 285576 kB' 'SUnreclaim: 637968 kB' 'KernelStack: 24528 kB' 'PageTables: 9776 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8474032 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228352 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.341 19:58:13 -- 
setup/common.sh@32 -- # continue 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.341 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.341 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.342 19:58:13 -- setup/common.sh@33 -- # echo 0 00:03:15.342 19:58:13 -- setup/common.sh@33 -- # return 0 00:03:15.342 19:58:13 -- setup/hugepages.sh@97 -- # anon=0 00:03:15.342 19:58:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:15.342 19:58:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.342 19:58:13 -- setup/common.sh@18 -- # local node= 00:03:15.342 19:58:13 -- setup/common.sh@19 -- # local var val 00:03:15.342 19:58:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.342 19:58:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.342 19:58:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.342 19:58:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.342 19:58:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.342 19:58:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109767468 kB' 'MemAvailable: 113502756 kB' 'Buffers: 2696 kB' 'Cached: 10715656 kB' 'SwapCached: 0 kB' 'Active: 7864252 kB' 'Inactive: 3438960 kB' 'Active(anon): 6886080 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594488 kB' 'Mapped: 179332 kB' 'Shmem: 6301220 kB' 'KReclaimable: 285576 kB' 'Slab: 923520 kB' 'SReclaimable: 285576 kB' 'SUnreclaim: 637944 kB' 'KernelStack: 24560 kB' 'PageTables: 9852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8474044 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228320 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 
kB' 'DirectMap1G: 115343360 kB' 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.342 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.342 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # 
continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 
-- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:15.343 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.343 19:58:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.343 19:58:13 -- setup/common.sh@33 -- # echo 0 00:03:15.343 19:58:13 -- setup/common.sh@33 -- # return 0 00:03:15.343 19:58:13 -- setup/hugepages.sh@99 -- # surp=0 00:03:15.343 19:58:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:15.343 19:58:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:15.343 19:58:13 -- setup/common.sh@18 -- # local node= 00:03:15.343 19:58:13 -- setup/common.sh@19 -- # local var val 00:03:15.343 19:58:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.343 19:58:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.343 19:58:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.343 19:58:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.343 19:58:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.343 19:58:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.343 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109766796 kB' 'MemAvailable: 113502084 kB' 'Buffers: 2696 kB' 'Cached: 10715668 kB' 'SwapCached: 0 kB' 'Active: 7864316 kB' 'Inactive: 3438960 kB' 'Active(anon): 6886144 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594560 kB' 'Mapped: 179272 kB' 'Shmem: 6301232 kB' 'KReclaimable: 285576 kB' 'Slab: 923588 kB' 'SReclaimable: 285576 kB' 'SUnreclaim: 638012 kB' 'KernelStack: 24528 kB' 'PageTables: 9776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8475564 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228368 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 
00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.344 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.344 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.345 19:58:13 -- setup/common.sh@33 -- # echo 0 00:03:15.345 19:58:13 -- setup/common.sh@33 -- # return 0 00:03:15.345 19:58:13 -- setup/hugepages.sh@100 -- # resv=0 00:03:15.345 19:58:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:15.345 nr_hugepages=1024 00:03:15.345 19:58:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:15.345 resv_hugepages=0 00:03:15.345 19:58:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:15.345 surplus_hugepages=0 00:03:15.345 19:58:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:15.345 anon_hugepages=0 00:03:15.345 19:58:13 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:15.345 19:58:13 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:15.345 19:58:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:15.345 19:58:13 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:15.345 19:58:13 -- setup/common.sh@18 -- # local node= 00:03:15.345 19:58:13 -- setup/common.sh@19 -- # local var val 00:03:15.345 19:58:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.345 19:58:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.345 19:58:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.345 19:58:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.345 19:58:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.345 19:58:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109763524 kB' 'MemAvailable: 113498812 kB' 'Buffers: 2696 kB' 'Cached: 10715680 kB' 'SwapCached: 0 kB' 'Active: 7864148 kB' 'Inactive: 3438960 kB' 'Active(anon): 6885976 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594328 kB' 'Mapped: 179272 kB' 'Shmem: 6301244 kB' 'KReclaimable: 285576 kB' 'Slab: 923588 kB' 'SReclaimable: 285576 kB' 'SUnreclaim: 638012 kB' 'KernelStack: 24608 kB' 'PageTables: 9648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8475580 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228432 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.345 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.345 19:58:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # 
continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 
00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.346 19:58:13 -- setup/common.sh@33 -- # echo 1024 00:03:15.346 19:58:13 -- setup/common.sh@33 -- # return 0 00:03:15.346 19:58:13 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:15.346 19:58:13 -- setup/hugepages.sh@112 -- # get_nodes 00:03:15.346 19:58:13 -- setup/hugepages.sh@27 -- # local node 00:03:15.346 19:58:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.346 19:58:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:15.346 19:58:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.346 19:58:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:15.346 19:58:13 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:15.346 19:58:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:15.346 19:58:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.346 19:58:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.346 19:58:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:15.346 19:58:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.346 19:58:13 -- setup/common.sh@18 -- # local node=0 00:03:15.346 19:58:13 -- setup/common.sh@19 -- # local var val 00:03:15.346 19:58:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.346 19:58:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.346 19:58:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:15.346 19:58:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:15.346 19:58:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.346 19:58:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 60336268 kB' 'MemUsed: 5419712 kB' 'SwapCached: 0 
kB' 'Active: 1795368 kB' 'Inactive: 84312 kB' 'Active(anon): 1435512 kB' 'Inactive(anon): 0 kB' 'Active(file): 359856 kB' 'Inactive(file): 84312 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1826248 kB' 'Mapped: 34200 kB' 'AnonPages: 62716 kB' 'Shmem: 1382080 kB' 'KernelStack: 10600 kB' 'PageTables: 3408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121012 kB' 'Slab: 442056 kB' 'SReclaimable: 121012 kB' 'SUnreclaim: 321044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.346 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.346 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 
00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # continue 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.347 19:58:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.347 19:58:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.347 19:58:13 -- setup/common.sh@33 -- # echo 0 00:03:15.347 19:58:13 -- setup/common.sh@33 -- # return 0 00:03:15.347 19:58:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.347 19:58:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.347 19:58:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.347 19:58:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.347 19:58:13 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:15.347 node0=1024 expecting 1024 00:03:15.347 19:58:13 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:15.347 00:03:15.347 real 0m4.133s 00:03:15.347 user 0m1.043s 00:03:15.347 sys 0m1.881s 00:03:15.347 19:58:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.347 19:58:13 -- common/autotest_common.sh@10 -- # set +x 00:03:15.347 ************************************ 00:03:15.347 END TEST default_setup 00:03:15.347 ************************************ 00:03:15.347 19:58:13 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:15.347 19:58:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:15.347 19:58:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:15.347 19:58:13 -- common/autotest_common.sh@10 -- # set +x 00:03:15.347 ************************************ 00:03:15.347 START TEST per_node_1G_alloc 00:03:15.347 ************************************ 00:03:15.347 19:58:13 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:15.347 19:58:13 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:15.347 19:58:13 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:15.347 19:58:13 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:15.347 19:58:13 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:15.347 19:58:13 -- setup/hugepages.sh@51 -- # shift 00:03:15.347 19:58:13 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:15.347 19:58:13 -- setup/hugepages.sh@52 -- # local node_ids 00:03:15.347 19:58:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:15.347 19:58:13 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:15.347 19:58:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:15.347 19:58:13 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:15.347 19:58:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.347 19:58:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:15.347 19:58:13 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:15.347 19:58:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.347 19:58:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:15.347 19:58:13 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:15.347 19:58:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:15.347 19:58:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:15.347 19:58:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:15.347 19:58:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:15.347 19:58:13 -- setup/hugepages.sh@73 -- # return 0 00:03:15.347 19:58:13 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:15.347 
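The trace above (from get_test_nr_hugepages 1048576 0 1 through NRHUGE=512) is the per-node allocation step of the per_node_1G_alloc test: the 1 GiB request is turned into a hugepage count and that same count is requested on every node listed in HUGENODE. A minimal bash sketch of that arithmetic follows; it assumes the page count comes from dividing the request by the 2048 kB default hugepage size (consistent with 512 pages x 2 nodes = the 1024 pages verified later in the log), and the variable names are illustrative, not the setup/hugepages.sh source.

#!/usr/bin/env bash
# Illustrative sketch only -- not the SPDK setup/hugepages.sh implementation.
# 1048576 kB / 2048 kB per page = 512 pages, requested on node 0 and node 1,
# which is why the log later checks HugePages_Total == 1024.

size_kb=1048576                 # size argument seen in the trace (1 GiB)
nodes=(0 1)                     # node IDs passed to the test

# Default hugepage size, e.g. "Hugepagesize: 2048 kB" in /proc/meminfo.
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

pages_per_node=$(( size_kb / hugepagesize_kb ))

declare -A nodes_test
for node in "${nodes[@]}"; do
    nodes_test[$node]=$pages_per_node
done

echo "NRHUGE=$pages_per_node HUGENODE=$(IFS=,; echo "${nodes[*]}")"
# scripts/setup.sh is then invoked with these variables; on a real run it
# writes the counts to
# /sys/devices/system/node/node<N>/hugepages/hugepages-<size>kB/nr_hugepages,
# which needs root, so it is only noted here as a comment.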
19:58:13 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:15.347 19:58:13 -- setup/hugepages.sh@146 -- # setup output 00:03:15.347 19:58:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.347 19:58:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:17.928 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:17.928 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:03:17.928 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:17.928 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:17.928 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:17.928 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:17.928 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:17.928 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:17.928 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:17.928 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:17.928 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:17.928 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:17.928 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:17.928 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:17.928 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:17.928 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:17.928 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:17.928 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:03:18.195 19:58:15 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:18.195 19:58:15 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:18.195 19:58:15 -- setup/hugepages.sh@89 -- # local node 00:03:18.195 19:58:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:18.195 19:58:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:18.195 19:58:15 -- setup/hugepages.sh@92 -- # local surp 00:03:18.195 19:58:15 -- setup/hugepages.sh@93 -- # local resv 00:03:18.195 19:58:15 -- setup/hugepages.sh@94 -- # local anon 00:03:18.195 19:58:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:18.195 19:58:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:18.195 19:58:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:18.195 19:58:15 -- setup/common.sh@18 -- # local node= 00:03:18.195 19:58:15 -- setup/common.sh@19 -- # local var val 00:03:18.195 19:58:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.195 19:58:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.195 19:58:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.195 19:58:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.195 19:58:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.195 19:58:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.195 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.195 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.195 19:58:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109763448 kB' 'MemAvailable: 113498704 kB' 'Buffers: 2696 kB' 'Cached: 10715788 kB' 'SwapCached: 0 kB' 'Active: 7865208 kB' 'Inactive: 3438960 kB' 'Active(anon): 6887036 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594888 kB' 'Mapped: 179292 kB' 'Shmem: 6301352 kB' 'KReclaimable: 285512 kB' 'Slab: 923268 kB' 'SReclaimable: 285512 kB' 'SUnreclaim: 637756 kB' 'KernelStack: 24544 kB' 'PageTables: 9768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8471968 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228464 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:18.195 19:58:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.195 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.195 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.195 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.195 19:58:15 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.195 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.195 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.195 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.195 19:58:15 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.195 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.195 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.195 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.195 19:58:15 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.195 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.195 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.195 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.195 19:58:15 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.195 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.195 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.195 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.195 19:58:15 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.195 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.195 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.195 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.195 19:58:15 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.195 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.195 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.195 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.195 19:58:15 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.195 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- 
setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 
-- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.196 19:58:15 -- setup/common.sh@33 -- # echo 0 00:03:18.196 19:58:15 -- setup/common.sh@33 -- # return 0 00:03:18.196 19:58:15 -- setup/hugepages.sh@97 -- # anon=0 00:03:18.196 19:58:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:18.196 19:58:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.196 19:58:15 -- setup/common.sh@18 -- # local node= 00:03:18.196 19:58:15 -- setup/common.sh@19 -- # local var val 00:03:18.196 19:58:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.196 19:58:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.196 19:58:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.196 19:58:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.196 19:58:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.196 19:58:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.196 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.196 19:58:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109765544 kB' 'MemAvailable: 113500800 kB' 'Buffers: 2696 kB' 'Cached: 10715792 kB' 'SwapCached: 0 kB' 'Active: 7865176 kB' 'Inactive: 3438960 kB' 'Active(anon): 6887004 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594820 kB' 'Mapped: 179292 kB' 'Shmem: 6301356 kB' 'KReclaimable: 285512 kB' 'Slab: 923260 kB' 'SReclaimable: 285512 kB' 'SUnreclaim: 637748 kB' 'KernelStack: 24512 kB' 'PageTables: 9652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8471980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228432 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:18.196 19:58:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 
00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # 
continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.197 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.197 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.198 19:58:15 -- setup/common.sh@33 -- # echo 0 00:03:18.198 19:58:15 -- setup/common.sh@33 -- # return 0 00:03:18.198 19:58:15 -- setup/hugepages.sh@99 -- # surp=0 00:03:18.198 19:58:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:18.198 19:58:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:18.198 19:58:15 -- setup/common.sh@18 -- # local node= 00:03:18.198 19:58:15 -- setup/common.sh@19 -- # local var val 00:03:18.198 19:58:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.198 19:58:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.198 19:58:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.198 19:58:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.198 19:58:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.198 19:58:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109766148 kB' 'MemAvailable: 113501404 kB' 'Buffers: 2696 kB' 'Cached: 10715796 kB' 'SwapCached: 0 kB' 'Active: 7864184 kB' 'Inactive: 3438960 kB' 'Active(anon): 6886012 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593796 kB' 'Mapped: 179280 kB' 'Shmem: 6301360 kB' 'KReclaimable: 285512 kB' 'Slab: 923260 kB' 'SReclaimable: 285512 kB' 'SUnreclaim: 637748 kB' 'KernelStack: 24528 kB' 'PageTables: 9616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8471996 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228448 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 
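The long runs of [[ <key> == HugePages_Rsvd ]] ... continue around this point are the xtrace of the same small parser driving every get_meminfo call in this test: it loads the node's meminfo, strips the "Node <N>" prefix, then walks the key/value pairs until it reaches the requested field, echoes the value, and returns. A condensed sketch of that pattern is below; the function and file names are illustrative and this is not the verbatim setup/common.sh source.

#!/usr/bin/env bash
# Condensed sketch of the get_meminfo pattern visible in the xtrace above.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node queries read that node's own meminfo, whose lines carry a
    # "Node <N> " prefix that is stripped before parsing.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # no-op for plain /proc/meminfo lines

    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# The values the verification step is collecting here:
get_meminfo_sketch HugePages_Rsvd       # 0 in the log above
get_meminfo_sketch HugePages_Surp 0     # 0 on node 0 in the log above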
00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ SwapFree 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.198 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.198 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # 
continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.199 19:58:15 -- setup/common.sh@33 -- # echo 0 00:03:18.199 19:58:15 -- setup/common.sh@33 -- # return 0 00:03:18.199 19:58:15 -- setup/hugepages.sh@100 -- # resv=0 00:03:18.199 19:58:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:18.199 nr_hugepages=1024 00:03:18.199 19:58:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:18.199 resv_hugepages=0 00:03:18.199 19:58:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:18.199 surplus_hugepages=0 00:03:18.199 19:58:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:18.199 anon_hugepages=0 00:03:18.199 19:58:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.199 19:58:15 -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:03:18.199 19:58:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:18.199 19:58:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:18.199 19:58:15 -- setup/common.sh@18 -- # local node= 00:03:18.199 19:58:15 -- setup/common.sh@19 -- # local var val 00:03:18.199 19:58:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.199 19:58:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.199 19:58:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.199 19:58:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.199 19:58:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.199 19:58:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109766148 kB' 'MemAvailable: 113501404 kB' 'Buffers: 2696 kB' 'Cached: 10715816 kB' 'SwapCached: 0 kB' 'Active: 7864404 kB' 'Inactive: 3438960 kB' 'Active(anon): 6886232 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593988 kB' 'Mapped: 179280 kB' 'Shmem: 6301380 kB' 'KReclaimable: 285512 kB' 'Slab: 923260 kB' 'SReclaimable: 285512 kB' 'SUnreclaim: 637748 kB' 'KernelStack: 24544 kB' 'PageTables: 9664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8472012 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228448 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.199 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.199 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 
19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
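(Note for readers skimming the trace: the long runs of `[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]` / `continue` entries above and below come from the get_meminfo helper in the test's setup/common.sh scanning a meminfo file one field at a time until the requested key matches. The following is a minimal, simplified sketch of that lookup, not the script verbatim; the function name get_meminfo_sketch is invented for illustration.)

  # Sketch: fetch one field from a meminfo file the way the traced loop does -
  # read key/value pairs and skip ("continue") every key that is not requested.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node lookups read the node-specific file instead, when present.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # the "continue" entries in the trace
          echo "${val:-0}"
          return 0
      done < <(sed 's/^Node [0-9]* //' "$mem_f")   # drop the "Node N " prefix
      echo 0                                        # key absent -> 0, as above
  }
  # e.g. get_meminfo_sketch HugePages_Rsvd      -> 0 (system-wide, as echoed above)
  #      get_meminfo_sketch HugePages_Total 0   -> 512 on this box (node 0)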
00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:15 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.200 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.200 19:58:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.200 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.200 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.201 19:58:16 -- 
setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.201 19:58:16 -- setup/common.sh@33 -- # echo 1024 00:03:18.201 19:58:16 -- setup/common.sh@33 -- # return 0 00:03:18.201 19:58:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.201 19:58:16 -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.201 19:58:16 -- setup/hugepages.sh@27 -- # local node 00:03:18.201 19:58:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.201 19:58:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.201 19:58:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.201 19:58:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.201 19:58:16 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.201 19:58:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.201 19:58:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.201 19:58:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.201 19:58:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:18.201 19:58:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.201 19:58:16 -- setup/common.sh@18 -- # local node=0 00:03:18.201 19:58:16 -- setup/common.sh@19 -- # local var val 00:03:18.201 19:58:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.201 19:58:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.201 19:58:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.201 19:58:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.201 19:58:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.201 19:58:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:18.201 19:58:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 61374544 kB' 'MemUsed: 4381436 kB' 'SwapCached: 0 kB' 'Active: 1795652 kB' 'Inactive: 84312 kB' 'Active(anon): 1435796 kB' 'Inactive(anon): 0 kB' 'Active(file): 359856 kB' 'Inactive(file): 84312 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1826344 kB' 'Mapped: 34200 kB' 'AnonPages: 62692 kB' 'Shmem: 1382176 kB' 'KernelStack: 10632 kB' 'PageTables: 3460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121012 kB' 'Slab: 441796 kB' 'SReclaimable: 121012 kB' 'SUnreclaim: 320784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # 
continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.201 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.201 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 
19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@33 -- # echo 0 00:03:18.202 19:58:16 -- setup/common.sh@33 -- # return 0 00:03:18.202 19:58:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.202 19:58:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.202 19:58:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.202 19:58:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:18.202 19:58:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.202 19:58:16 -- setup/common.sh@18 -- # local node=1 00:03:18.202 19:58:16 -- setup/common.sh@19 -- # local var val 00:03:18.202 19:58:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.202 19:58:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.202 19:58:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:18.202 19:58:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:18.202 19:58:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.202 19:58:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60681976 kB' 'MemFree: 48391500 kB' 'MemUsed: 12290476 kB' 'SwapCached: 0 kB' 'Active: 6068880 kB' 'Inactive: 3354648 kB' 'Active(anon): 5450564 kB' 'Inactive(anon): 0 kB' 'Active(file): 618316 kB' 'Inactive(file): 3354648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8892184 kB' 'Mapped: 145080 kB' 'AnonPages: 531448 kB' 'Shmem: 4919220 kB' 'KernelStack: 13912 kB' 'PageTables: 6204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164500 kB' 'Slab: 481464 kB' 'SReclaimable: 164500 kB' 'SUnreclaim: 316964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 
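(What the per-node bookkeeping around here amounts to: for each directory under /sys/devices/system/node, the test folds the reserved and surplus counts into that node's figure and prints the "nodeN=... expecting ..." comparison seen a little further down. A rough, self-contained sketch of that accounting follows; it is an approximation of the traced behaviour, not the script itself, and the 512 expectation is simply the even share observed in this run.)

  # Rough sketch of the per-node hugepage accounting traced here: each node's
  # HugePages_Total (plus any reserved/surplus pages) should equal its share
  # of the overall request - 512 pages per node on this two-node box.
  resv=0   # HugePages_Rsvd from the system-wide lookup earlier in the log
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
      surp=$(awk '/HugePages_Surp:/   {print $NF}' "$node_dir/meminfo")
      nodes_test[node]=$(( total + resv + surp ))
      echo "node${node}=${nodes_test[node]} expecting 512"
  done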
00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.202 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.202 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # continue 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.203 19:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.203 19:58:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.203 19:58:16 -- setup/common.sh@33 -- # echo 0 00:03:18.203 19:58:16 -- setup/common.sh@33 -- # return 0 00:03:18.203 19:58:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.203 19:58:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.203 19:58:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.203 19:58:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.203 19:58:16 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:18.203 node0=512 expecting 512 00:03:18.203 19:58:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.203 19:58:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.203 19:58:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.203 19:58:16 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:18.203 node1=512 expecting 512 00:03:18.203 19:58:16 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:18.203 00:03:18.203 real 0m2.795s 00:03:18.203 user 0m0.970s 00:03:18.203 sys 0m1.591s 00:03:18.203 19:58:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.203 19:58:16 -- common/autotest_common.sh@10 -- # set +x 00:03:18.203 ************************************ 00:03:18.203 END TEST per_node_1G_alloc 00:03:18.203 ************************************ 00:03:18.203 19:58:16 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:18.203 19:58:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:18.203 19:58:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:18.203 19:58:16 -- common/autotest_common.sh@10 -- # set +x 00:03:18.203 ************************************ 00:03:18.203 START TEST even_2G_alloc 00:03:18.203 ************************************ 00:03:18.203 19:58:16 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:18.203 19:58:16 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:18.203 19:58:16 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:18.203 19:58:16 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:18.203 19:58:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.203 19:58:16 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:18.203 19:58:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:18.203 19:58:16 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:18.203 19:58:16 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.203 19:58:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.203 19:58:16 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.203 19:58:16 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.203 19:58:16 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.203 19:58:16 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:18.203 19:58:16 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:18.203 19:58:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.203 19:58:16 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:18.203 19:58:16 -- setup/hugepages.sh@83 -- # : 512 00:03:18.203 19:58:16 -- setup/hugepages.sh@84 -- # : 1 00:03:18.203 19:58:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.203 19:58:16 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:18.203 19:58:16 -- setup/hugepages.sh@83 -- # : 0 00:03:18.203 19:58:16 -- setup/hugepages.sh@84 -- # : 0 00:03:18.203 19:58:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.203 19:58:16 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:18.203 19:58:16 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:18.203 19:58:16 -- setup/hugepages.sh@153 -- # setup output 00:03:18.203 19:58:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.203 19:58:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:20.748 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:20.748 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:03:20.748 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:20.748 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:20.748 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:20.748 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:20.748 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:20.748 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:20.748 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:20.748 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:20.748 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:20.748 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:20.748 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:20.748 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:20.748 0000:e7:01.0 (8086 0b25): 
Already using the vfio-pci driver 00:03:20.748 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:20.748 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:20.748 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:03:21.013 19:58:18 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:21.013 19:58:18 -- setup/hugepages.sh@89 -- # local node 00:03:21.013 19:58:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:21.013 19:58:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:21.013 19:58:18 -- setup/hugepages.sh@92 -- # local surp 00:03:21.013 19:58:18 -- setup/hugepages.sh@93 -- # local resv 00:03:21.013 19:58:18 -- setup/hugepages.sh@94 -- # local anon 00:03:21.013 19:58:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:21.013 19:58:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:21.013 19:58:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:21.013 19:58:18 -- setup/common.sh@18 -- # local node= 00:03:21.013 19:58:18 -- setup/common.sh@19 -- # local var val 00:03:21.013 19:58:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.013 19:58:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.013 19:58:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.013 19:58:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.013 19:58:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.013 19:58:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109778628 kB' 'MemAvailable: 113513868 kB' 'Buffers: 2696 kB' 'Cached: 10715904 kB' 'SwapCached: 0 kB' 'Active: 7853728 kB' 'Inactive: 3438960 kB' 'Active(anon): 6875556 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583148 kB' 'Mapped: 177912 kB' 'Shmem: 6301468 kB' 'KReclaimable: 285480 kB' 'Slab: 923344 kB' 'SReclaimable: 285480 kB' 'SUnreclaim: 637864 kB' 'KernelStack: 24320 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8413540 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228320 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 
19:58:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 
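(Context for the even_2G_alloc trace that started above: the requested 2097152 kB is converted into 1024 default-size hugepages, using the Hugepagesize of 2048 kB reported in the meminfo dumps, and then spread evenly over the two NUMA nodes, 512 each, before setup.sh is re-run with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. A small sketch of that arithmetic, simplified from the traced values rather than copied from the script:)

  # Sketch of the even-split arithmetic seen in the even_2G_alloc setup trace.
  size_kb=2097152                 # 2 GiB request
  default_hugepage_kb=2048        # Hugepagesize from /proc/meminfo above
  nr_hugepages=$(( size_kb / default_hugepage_kb ))    # -> 1024
  no_nodes=2                      # NUMA nodes on this box
  nodes_test=()
  for (( node = 0; node < no_nodes; node++ )); do
      nodes_test[node]=$(( nr_hugepages / no_nodes ))  # -> 512 per node
  done
  echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes per-node=${nodes_test[*]}"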
00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.013 19:58:18 -- setup/common.sh@32 -- # continue 
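Note: the long run of continue entries above and below is get_meminfo walking /proc/meminfo one key at a time: each line is split with IFS=': ', compared against the requested key, and skipped until the key matches, at which point the value is echoed and the function returns. A minimal standalone sketch of that lookup, assuming the standard /proc and sysfs meminfo layout (illustrative only, not the verbatim setup/common.sh source):

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern that strips "Node N " prefixes

  get_meminfo_sketch() {
      local get=$1 node=${2:-}        # key to look up, optional NUMA node number
      local mem_f=/proc/meminfo var val _
      local -a mem
      # With a node argument, read that node's view of the counters instead.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node meminfo lines start with "Node N "; drop that prefix.
      mem=("${mem[@]#Node +([0-9]) }")
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the repeated 'continue' seen in the trace
          echo "${val:-0}"
          return 0
      done
      return 1
  }

  get_meminfo_sketch HugePages_Total      # system-wide, e.g. 1024
  get_meminfo_sketch HugePages_Surp 0     # node 0 only

The values this kind of lookup prints in the trace (anon=0, surp=0, resv=0, HugePages_Total=1024) are what the hugepage checks further down consume.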
00:03:21.013 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.013 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ AnonHugePages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.014 19:58:18 -- setup/common.sh@33 -- # echo 0 00:03:21.014 19:58:18 -- setup/common.sh@33 -- # return 0 00:03:21.014 19:58:18 -- setup/hugepages.sh@97 -- # anon=0 00:03:21.014 19:58:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:21.014 19:58:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.014 19:58:18 -- setup/common.sh@18 -- # local node= 00:03:21.014 19:58:18 -- setup/common.sh@19 -- # local var val 00:03:21.014 19:58:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.014 19:58:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.014 19:58:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.014 19:58:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.014 19:58:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.014 19:58:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109778316 kB' 'MemAvailable: 113513556 kB' 'Buffers: 2696 kB' 'Cached: 10715908 kB' 'SwapCached: 0 kB' 'Active: 7853984 kB' 'Inactive: 3438960 kB' 'Active(anon): 6875812 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583456 kB' 'Mapped: 177912 kB' 'Shmem: 6301472 kB' 'KReclaimable: 285480 kB' 'Slab: 923308 kB' 'SReclaimable: 285480 kB' 'SUnreclaim: 637828 kB' 'KernelStack: 24288 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8413548 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228272 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ 
Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.014 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.014 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 
00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.015 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.015 19:58:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.015 19:58:18 -- setup/common.sh@33 -- # echo 0 00:03:21.015 19:58:18 -- setup/common.sh@33 -- # return 0 00:03:21.015 19:58:18 -- setup/hugepages.sh@99 -- # surp=0 00:03:21.015 19:58:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:21.015 19:58:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:21.015 19:58:18 -- setup/common.sh@18 -- # local node= 00:03:21.015 19:58:18 -- setup/common.sh@19 -- # local var val 00:03:21.015 19:58:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.015 19:58:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.015 19:58:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.015 19:58:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.015 19:58:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.015 19:58:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.016 19:58:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109778264 kB' 'MemAvailable: 113513504 kB' 'Buffers: 2696 kB' 'Cached: 10715920 kB' 'SwapCached: 0 kB' 'Active: 7853504 kB' 'Inactive: 3438960 kB' 'Active(anon): 6875332 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582880 kB' 'Mapped: 177908 kB' 'Shmem: 6301484 kB' 'KReclaimable: 285480 kB' 'Slab: 923320 kB' 'SReclaimable: 285480 kB' 'SUnreclaim: 637840 kB' 'KernelStack: 24304 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8413564 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228288 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 
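Note: each of the three scans in this stretch comes back empty for its key: anon (AnonHugePages), surp (HugePages_Surp) and resv (HugePages_Rsvd) are all set to 0, and the HugePages_Total scan a little further down returns 1024, so the consistency check the script then runs reduces to 1024 == 1024 + 0 + 0. Restated as a standalone check (variable names follow the trace; this is not the verbatim hugepages.sh code):

  nr_hugepages=1024   # pool size requested earlier in the run
  surp=0              # HugePages_Surp, pages
  resv=0              # HugePages_Rsvd, pages
  total=1024          # HugePages_Total reported by the kernel

  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage pool is consistent"
  else
      echo "hugepage pool mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
      exit 1
  fi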
00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ AnonPages 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.016 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.016 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.017 
19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.017 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.017 19:58:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.017 19:58:18 -- setup/common.sh@33 -- # echo 0 00:03:21.017 19:58:18 -- setup/common.sh@33 -- # return 0 00:03:21.017 19:58:18 -- setup/hugepages.sh@100 -- # resv=0 00:03:21.017 19:58:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:21.017 nr_hugepages=1024 00:03:21.017 19:58:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:21.017 resv_hugepages=0 00:03:21.017 19:58:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:21.017 surplus_hugepages=0 00:03:21.017 19:58:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:21.017 anon_hugepages=0 00:03:21.017 19:58:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.017 19:58:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:21.281 19:58:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:21.281 19:58:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:21.281 19:58:18 -- setup/common.sh@18 -- # local node= 00:03:21.281 19:58:18 -- setup/common.sh@19 -- # local var val 00:03:21.281 19:58:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.281 19:58:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.281 19:58:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.281 19:58:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.281 19:58:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.281 19:58:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.281 19:58:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109778516 kB' 'MemAvailable: 113513756 kB' 'Buffers: 2696 kB' 'Cached: 10715936 kB' 'SwapCached: 0 kB' 'Active: 7853512 kB' 'Inactive: 3438960 kB' 'Active(anon): 6875340 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582948 kB' 'Mapped: 
177908 kB' 'Shmem: 6301500 kB' 'KReclaimable: 285480 kB' 'Slab: 923320 kB' 'SReclaimable: 285480 kB' 'SUnreclaim: 637840 kB' 'KernelStack: 24336 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8413580 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228288 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:21.281 19:58:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.281 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.281 19:58:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.281 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.281 19:58:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.281 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.281 19:58:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.281 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.281 19:58:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.281 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.281 19:58:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.281 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.281 19:58:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.281 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.281 19:58:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.281 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.281 19:58:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.281 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.281 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
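Note: the HugePages_Total scan continues below and returns 1024; the script then switches to per-node verification: get_nodes enumerates /sys/devices/system/node/node*, finds two nodes and expects 512 pages on each, and the same get_meminfo lookup is repeated against each node's own meminfo file (the node0 HugePages_Surp pass starts near the end of this block). A rough sketch of that per-node pass, assuming the standard sysfs layout (illustrative only):

  shopt -s nullglob   # skip the loop cleanly if no node directories exist

  declare -A expected
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      n=${node_dir##*node}
      expected[$n]=512                       # 1024 pages split evenly across 2 nodes
  done

  for n in "${!expected[@]}"; do
      # Per-node counters are plain page counts, e.g. "Node 0 HugePages_Free:   512"
      free=$(awk '/HugePages_Free:/ {print $NF}' "/sys/devices/system/node/node$n/meminfo")
      echo "node$n: expected ${expected[$n]} pages, HugePages_Free=$free"
  done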
00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.282 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.282 19:58:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.282 19:58:18 -- setup/common.sh@33 -- # echo 1024 00:03:21.282 19:58:18 -- setup/common.sh@33 -- # return 0 00:03:21.282 19:58:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.282 19:58:18 -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.282 19:58:18 -- setup/hugepages.sh@27 -- # local node 00:03:21.283 19:58:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.283 19:58:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:21.283 19:58:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.283 19:58:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:21.283 19:58:18 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:21.283 19:58:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.283 19:58:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.283 19:58:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.283 19:58:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.283 19:58:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.283 19:58:18 -- setup/common.sh@18 -- # local node=0 00:03:21.283 19:58:18 -- setup/common.sh@19 -- # local var val 00:03:21.283 19:58:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.283 19:58:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.283 19:58:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.283 19:58:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.283 19:58:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.283 19:58:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 61376432 kB' 'MemUsed: 4379548 kB' 'SwapCached: 0 kB' 'Active: 1787976 kB' 'Inactive: 84312 kB' 'Active(anon): 1428120 kB' 'Inactive(anon): 0 kB' 'Active(file): 359856 kB' 'Inactive(file): 84312 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1826376 kB' 'Mapped: 32920 kB' 'AnonPages: 54956 kB' 'Shmem: 1382208 kB' 'KernelStack: 10456 kB' 'PageTables: 2620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121012 kB' 'Slab: 441192 kB' 'SReclaimable: 121012 kB' 'SUnreclaim: 320180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 
-- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 
-- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.283 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.283 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@33 -- # echo 0 00:03:21.284 19:58:18 -- setup/common.sh@33 -- # return 0 00:03:21.284 19:58:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.284 19:58:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.284 19:58:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.284 19:58:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:21.284 19:58:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.284 19:58:18 -- setup/common.sh@18 -- # local node=1 00:03:21.284 19:58:18 -- setup/common.sh@19 -- # local var val 00:03:21.284 19:58:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.284 19:58:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.284 19:58:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:21.284 19:58:18 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:03:21.284 19:58:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.284 19:58:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60681976 kB' 'MemFree: 48402440 kB' 'MemUsed: 12279536 kB' 'SwapCached: 0 kB' 'Active: 6065584 kB' 'Inactive: 3354648 kB' 'Active(anon): 5447268 kB' 'Inactive(anon): 0 kB' 'Active(file): 618316 kB' 'Inactive(file): 3354648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8892268 kB' 'Mapped: 144988 kB' 'AnonPages: 527972 kB' 'Shmem: 4919304 kB' 'KernelStack: 13864 kB' 'PageTables: 5816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164468 kB' 'Slab: 482128 kB' 'SReclaimable: 164468 kB' 'SUnreclaim: 317660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- 
setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.284 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.284 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.285 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.285 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.285 19:58:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.285 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.285 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.285 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.285 19:58:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.285 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.285 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.285 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.285 19:58:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.285 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.285 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.285 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.285 19:58:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.285 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.285 19:58:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.285 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.285 19:58:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.285 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.285 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.285 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.285 19:58:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.285 19:58:18 -- setup/common.sh@32 -- # continue 00:03:21.285 19:58:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.285 19:58:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.285 19:58:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.285 19:58:18 -- setup/common.sh@33 -- # echo 0 00:03:21.285 19:58:18 -- setup/common.sh@33 -- # return 0 00:03:21.285 19:58:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.285 19:58:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.285 19:58:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.285 19:58:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.285 19:58:18 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:21.285 node0=512 expecting 512 00:03:21.285 19:58:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.285 19:58:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.285 19:58:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.285 19:58:18 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:21.285 node1=512 expecting 512 00:03:21.285 19:58:18 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:21.285 00:03:21.285 real 0m2.910s 00:03:21.285 user 0m0.958s 00:03:21.285 sys 0m1.721s 00:03:21.285 19:58:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.285 19:58:18 -- common/autotest_common.sh@10 -- # set +x 00:03:21.285 ************************************ 00:03:21.285 END TEST even_2G_alloc 00:03:21.285 ************************************ 00:03:21.285 19:58:19 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:21.285 19:58:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:21.285 19:58:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:21.285 19:58:19 -- common/autotest_common.sh@10 -- # set +x 00:03:21.285 ************************************ 00:03:21.285 START TEST odd_alloc 00:03:21.285 ************************************ 00:03:21.285 19:58:19 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:21.285 19:58:19 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:21.285 19:58:19 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:21.285 19:58:19 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:21.285 19:58:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.285 19:58:19 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:21.285 19:58:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:21.285 19:58:19 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.285 19:58:19 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.285 19:58:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:21.285 19:58:19 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.285 19:58:19 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.285 19:58:19 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.285 19:58:19 
-- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.285 19:58:19 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:21.285 19:58:19 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.285 19:58:19 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:21.285 19:58:19 -- setup/hugepages.sh@83 -- # : 513 00:03:21.285 19:58:19 -- setup/hugepages.sh@84 -- # : 1 00:03:21.285 19:58:19 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.285 19:58:19 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:21.285 19:58:19 -- setup/hugepages.sh@83 -- # : 0 00:03:21.285 19:58:19 -- setup/hugepages.sh@84 -- # : 0 00:03:21.285 19:58:19 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.285 19:58:19 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:21.285 19:58:19 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:21.285 19:58:19 -- setup/hugepages.sh@160 -- # setup output 00:03:21.285 19:58:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.285 19:58:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:23.834 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:23.834 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:03:23.834 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:23.834 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:23.834 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:23.834 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:23.834 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:23.834 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:23.834 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:23.834 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:23.834 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:23.834 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:23.834 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:23.834 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:23.834 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:23.834 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:23.834 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:23.834 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:03:23.834 19:58:21 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:23.834 19:58:21 -- setup/hugepages.sh@89 -- # local node 00:03:23.834 19:58:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:23.834 19:58:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:23.834 19:58:21 -- setup/hugepages.sh@92 -- # local surp 00:03:23.834 19:58:21 -- setup/hugepages.sh@93 -- # local resv 00:03:23.834 19:58:21 -- setup/hugepages.sh@94 -- # local anon 00:03:23.834 19:58:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:23.834 19:58:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:23.834 19:58:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:23.834 19:58:21 -- setup/common.sh@18 -- # local node= 00:03:23.834 19:58:21 -- setup/common.sh@19 -- # local var val 00:03:23.834 19:58:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.834 19:58:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.834 19:58:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.834 19:58:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.834 
19:58:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.834 19:58:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.834 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.834 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.834 19:58:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109753192 kB' 'MemAvailable: 113488432 kB' 'Buffers: 2696 kB' 'Cached: 10716048 kB' 'SwapCached: 0 kB' 'Active: 7853128 kB' 'Inactive: 3438960 kB' 'Active(anon): 6874956 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582620 kB' 'Mapped: 177916 kB' 'Shmem: 6301612 kB' 'KReclaimable: 285480 kB' 'Slab: 922908 kB' 'SReclaimable: 285480 kB' 'SUnreclaim: 637428 kB' 'KernelStack: 24368 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557980 kB' 'Committed_AS: 8414324 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228256 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:23.834 19:58:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.834 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.834 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.834 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.834 19:58:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.834 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.834 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.834 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.834 19:58:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.834 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.834 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.834 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.834 19:58:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.834 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.834 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.834 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.834 19:58:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.834 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.834 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.834 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.834 19:58:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.834 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.834 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.834 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.834 19:58:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.834 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.834 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.834 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.834 
19:58:21 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.834 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.834 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.834 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 
19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 
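The xtrace above and just below is setup/common.sh's get_meminfo scanning every key of /proc/meminfo (or, when a node number is given, /sys/devices/system/node/nodeN/meminfo) and hitting `continue` on each key that is not the one requested, here AnonHugePages. A minimal sketch of that lookup pattern, reconstructed only from what the trace shows rather than from the SPDK source (the name get_meminfo_sketch is illustrative, not the real helper):

    # Sketch only: mirrors the pattern visible in the trace, not the actual SPDK function.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # per-node lookup
        fi
        # Per-node files prefix each line with "Node N "; strip that, then split
        # on ": " exactly as the traced `IFS=': ' read -r var val _` loop does.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long runs of "continue" in the trace
            echo "$val"                        # value only, e.g. 0 for AnonHugePages here
            return 0
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

For example, get_meminfo_sketch HugePages_Surp 0 would print the surplus-page count for node 0, matching the `echo 0` / `return 0` pairs that close each lookup in this trace.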
00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.835 19:58:21 -- setup/common.sh@33 -- # echo 0 00:03:23.835 19:58:21 -- setup/common.sh@33 -- # return 0 00:03:23.835 19:58:21 -- setup/hugepages.sh@97 -- # anon=0 00:03:23.835 19:58:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:23.835 19:58:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.835 19:58:21 -- setup/common.sh@18 -- # local node= 00:03:23.835 19:58:21 -- setup/common.sh@19 -- # local var val 00:03:23.835 19:58:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.835 19:58:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.835 19:58:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.835 19:58:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.835 19:58:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.835 19:58:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.835 19:58:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109754172 kB' 'MemAvailable: 113489396 kB' 'Buffers: 2696 kB' 'Cached: 10716048 kB' 'SwapCached: 0 kB' 'Active: 7853608 kB' 'Inactive: 3438960 kB' 'Active(anon): 6875436 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 
'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583160 kB' 'Mapped: 177916 kB' 'Shmem: 6301612 kB' 'KReclaimable: 285448 kB' 'Slab: 922824 kB' 'SReclaimable: 285448 kB' 'SUnreclaim: 637376 kB' 'KernelStack: 24368 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557980 kB' 'Committed_AS: 8414336 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228224 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.835 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.835 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- 
setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.836 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.836 19:58:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.837 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.837 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.837 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.837 19:58:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.837 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.837 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.837 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.837 19:58:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.837 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.837 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.837 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.837 19:58:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.837 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.837 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.837 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.837 19:58:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.837 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.837 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.837 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 
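While the trace continues reading HugePages_Surp for the odd_alloc case: earlier in this run (under START TEST odd_alloc) get_test_nr_hugepages_per_node split the requested 1025 pages over the two NUMA nodes as node0=513 and node1=512, filling nodes from the highest index down with remaining/remaining_nodes pages each. A short sketch of that arithmetic, assuming nothing beyond what the trace itself shows (split_per_node_sketch is an illustrative name):

    # Sketch of the per-node split implied by the trace: 1025 pages over 2 nodes.
    split_per_node_sketch() {
        local remaining=$1 nodes=$2 i
        local -a per_node
        while (( nodes > 0 )); do
            per_node[nodes-1]=$(( remaining / nodes ))       # node1 gets 1025 / 2 = 512
            remaining=$(( remaining - per_node[nodes-1] ))   # 513 pages left
            nodes=$(( nodes - 1 ))                           # node0 then takes 513 / 1
        done
        for i in "${!per_node[@]}"; do
            echo "node$i=${per_node[i]}"
        done
    }
    split_per_node_sketch 1025 2    # prints node0=513 and node1=512

The same split is why the preceding even_2G_alloc test, which asked for 1024 pages, ended with 'node0=512 expecting 512' and 'node1=512 expecting 512'.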
00:03:23.837 19:58:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.837 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.837 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.837 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.837 19:58:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.837 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.837 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.837 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.837 19:58:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.837 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.837 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.837 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.837 19:58:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.837 19:58:21 -- setup/common.sh@32 -- # continue 00:03:23.837 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.837 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.837 19:58:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.837 19:58:21 -- setup/common.sh@33 -- # echo 0 00:03:23.837 19:58:21 -- setup/common.sh@33 -- # return 0 00:03:23.837 19:58:21 -- setup/hugepages.sh@99 -- # surp=0 00:03:23.837 19:58:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:23.837 19:58:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.837 19:58:21 -- setup/common.sh@18 -- # local node= 00:03:23.837 19:58:21 -- setup/common.sh@19 -- # local var val 00:03:23.837 19:58:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.837 19:58:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.837 19:58:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.837 19:58:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.837 19:58:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.101 19:58:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109754300 kB' 'MemAvailable: 113489524 kB' 'Buffers: 2696 kB' 'Cached: 10716060 kB' 'SwapCached: 0 kB' 'Active: 7853836 kB' 'Inactive: 3438960 kB' 'Active(anon): 6875664 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583380 kB' 'Mapped: 177916 kB' 'Shmem: 6301624 kB' 'KReclaimable: 285448 kB' 'Slab: 922844 kB' 'SReclaimable: 285448 kB' 'SUnreclaim: 637396 kB' 'KernelStack: 24384 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557980 kB' 'Committed_AS: 8414348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228272 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 
17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- 
# continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.101 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.101 19:58:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 
-- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
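The scan above is setup/common.sh's get_meminfo walking /proc/meminfo field by field: each line is split with IFS=': ' into a key and a value, non-matching keys fall through to continue, and the first match is echoed (the next trace entries return 0 for HugePages_Rsvd). A minimal standalone sketch of that lookup, with a hypothetical helper name and none of the xtrace plumbing of the real script:

#!/usr/bin/env bash
shopt -s extglob                       # needed for the +([0-9]) pattern below

# Minimal sketch (not the actual setup/common.sh helper): look a single field
# up in /proc/meminfo, or in a per-node meminfo file when a node id is given.
get_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo
    local mem line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then  # e.g. HugePages_Rsvd
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo_sketch HugePages_Rsvd      # prints 0 on the machine in this trace
get_meminfo_sketch HugePages_Free 0    # node 0 reports 512 further down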
00:03:24.102 19:58:21 -- setup/common.sh@33 -- # echo 0 00:03:24.102 19:58:21 -- setup/common.sh@33 -- # return 0 00:03:24.102 19:58:21 -- setup/hugepages.sh@100 -- # resv=0 00:03:24.102 19:58:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:24.102 nr_hugepages=1025 00:03:24.102 19:58:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.102 resv_hugepages=0 00:03:24.102 19:58:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.102 surplus_hugepages=0 00:03:24.102 19:58:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.102 anon_hugepages=0 00:03:24.102 19:58:21 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:24.102 19:58:21 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:24.102 19:58:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.102 19:58:21 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.102 19:58:21 -- setup/common.sh@18 -- # local node= 00:03:24.102 19:58:21 -- setup/common.sh@19 -- # local var val 00:03:24.102 19:58:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.102 19:58:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.102 19:58:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.102 19:58:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.102 19:58:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.102 19:58:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109753820 kB' 'MemAvailable: 113489044 kB' 'Buffers: 2696 kB' 'Cached: 10716064 kB' 'SwapCached: 0 kB' 'Active: 7853524 kB' 'Inactive: 3438960 kB' 'Active(anon): 6875352 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583040 kB' 'Mapped: 177916 kB' 'Shmem: 6301628 kB' 'KReclaimable: 285448 kB' 'Slab: 922844 kB' 'SReclaimable: 285448 kB' 'SUnreclaim: 637396 kB' 'KernelStack: 24368 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557980 kB' 'Committed_AS: 8414364 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228272 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.102 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.102 19:58:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.103 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.103 19:58:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.103 19:58:21 -- setup/common.sh@33 -- # echo 1025 00:03:24.103 19:58:21 -- setup/common.sh@33 -- # return 0 00:03:24.103 19:58:21 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:24.103 19:58:21 -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.103 19:58:21 -- setup/hugepages.sh@27 -- # local node 00:03:24.103 19:58:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.103 19:58:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.104 19:58:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.104 19:58:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:24.104 19:58:21 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.104 19:58:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.104 19:58:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.104 19:58:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.104 19:58:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 
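At this point the test has nr_hugepages=1025, resv=0 and surp=0, asserts that HugePages_Total matches their sum, and get_nodes has just recorded 512 and 513 pages for the two NUMA nodes, before the per-node HugePages_Surp lookups that follow. Roughly the same bookkeeping, condensed into plain commands; the values are the ones from this run, the real logic lives in the hugepages.sh helpers, and the sysfs path assumes the default 2048 kB page size shown above:

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
nr_hugepages=1025                              # what odd_alloc requested
(( total == nr_hugepages + surp + resv )) || echo "unexpected total: $total"

declare -a nodes_sys
for node in /sys/devices/system/node/node[0-9]*; do
    # per-node count of 2 MiB hugepages, straight from sysfs
    nodes_sys[${node##*node}]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "${nodes_sys[@]}"                         # 512 513 in this run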
00:03:24.104 19:58:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.104 19:58:21 -- setup/common.sh@18 -- # local node=0 00:03:24.104 19:58:21 -- setup/common.sh@19 -- # local var val 00:03:24.104 19:58:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.104 19:58:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.104 19:58:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.104 19:58:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.104 19:58:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.104 19:58:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 61345664 kB' 'MemUsed: 4410316 kB' 'SwapCached: 0 kB' 'Active: 1787016 kB' 'Inactive: 84312 kB' 'Active(anon): 1427160 kB' 'Inactive(anon): 0 kB' 'Active(file): 359856 kB' 'Inactive(file): 84312 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1826416 kB' 'Mapped: 32920 kB' 'AnonPages: 53968 kB' 'Shmem: 1382248 kB' 'KernelStack: 10504 kB' 'PageTables: 2572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121012 kB' 'Slab: 440828 kB' 'SReclaimable: 121012 kB' 'SUnreclaim: 319816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 
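The block just printed is /sys/devices/system/node/node0/meminfo, which get_meminfo then scans exactly like /proc/meminfo. As an aside (this is not what setup/common.sh does), the per-node hugepage counters are also exported as single-value sysfs files, which avoids parsing the whole dump when only those numbers are needed:

node=0
hp=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB
cat "$hp/nr_hugepages" "$hp/free_hugepages" "$hp/surplus_hugepages"
# node0 in this run: 512, 512 and 0 respectively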
00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- 
setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 
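The loop still tracing here (hugepages.sh@115-117) adds resv plus each node's HugePages_Surp onto the expected per-node figure, and the check at the end only requires the two sets of values to match, which is why the swapped placement reported further down ("node0=512 expecting 513", "node1=513 expecting 512") still passes. A sketch with this run's numbers hard-coded, standing in for the real helpers:

nodes_test=(513 512)   # intended split of the 1025 pages across node0/node1
nodes_sys=(512 513)    # split the kernel actually reports
resv=0

sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    surp_n=$(awk -v n="$node" '$1=="Node" && $2==n && $3=="HugePages_Surp:" {print $4}' \
                 "/sys/devices/system/node/node$node/meminfo")
    (( nodes_test[node] += resv + ${surp_n:-0} ))
    sorted_t[nodes_test[node]]=1    # indexing by the value keeps the set sorted
    sorted_s[nodes_sys[node]]=1
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
[[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo "distribution accepted"

The same trick of indexing a throwaway array by the value is what makes the final [[ 512 513 == 512 513 ]] comparison below independent of which node ended up with which count.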
00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.104 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.104 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@33 -- # echo 0 00:03:24.105 19:58:21 -- setup/common.sh@33 -- # return 0 00:03:24.105 19:58:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.105 19:58:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.105 19:58:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.105 19:58:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:24.105 19:58:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.105 19:58:21 -- setup/common.sh@18 -- # local node=1 00:03:24.105 19:58:21 -- setup/common.sh@19 -- # local var val 00:03:24.105 19:58:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.105 19:58:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.105 19:58:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:24.105 19:58:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:24.105 19:58:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.105 19:58:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60681976 kB' 'MemFree: 48408768 kB' 'MemUsed: 12273208 kB' 'SwapCached: 0 kB' 'Active: 6066900 kB' 'Inactive: 3354648 kB' 'Active(anon): 5448584 kB' 'Inactive(anon): 0 kB' 'Active(file): 618316 kB' 'Inactive(file): 3354648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8892384 kB' 'Mapped: 144996 kB' 'AnonPages: 529408 kB' 'Shmem: 4919420 kB' 'KernelStack: 13880 kB' 'PageTables: 5860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164436 kB' 'Slab: 482016 kB' 'SReclaimable: 164436 kB' 'SUnreclaim: 317580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 
19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.105 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.105 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.106 19:58:21 
-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # continue 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.106 19:58:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.106 19:58:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.106 19:58:21 -- setup/common.sh@33 -- # echo 0 00:03:24.106 19:58:21 -- setup/common.sh@33 -- # return 0 00:03:24.106 19:58:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.106 19:58:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.106 19:58:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.106 19:58:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.106 19:58:21 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:24.106 node0=512 expecting 513 00:03:24.106 19:58:21 -- setup/hugepages.sh@126 -- # for node in 
"${!nodes_test[@]}" 00:03:24.106 19:58:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.106 19:58:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.106 19:58:21 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:24.106 node1=513 expecting 512 00:03:24.106 19:58:21 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:24.106 00:03:24.106 real 0m2.819s 00:03:24.106 user 0m0.945s 00:03:24.106 sys 0m1.621s 00:03:24.106 19:58:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.106 19:58:21 -- common/autotest_common.sh@10 -- # set +x 00:03:24.106 ************************************ 00:03:24.106 END TEST odd_alloc 00:03:24.106 ************************************ 00:03:24.106 19:58:21 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:24.106 19:58:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:24.106 19:58:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:24.106 19:58:21 -- common/autotest_common.sh@10 -- # set +x 00:03:24.106 ************************************ 00:03:24.106 START TEST custom_alloc 00:03:24.106 ************************************ 00:03:24.106 19:58:21 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:24.106 19:58:21 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:24.106 19:58:21 -- setup/hugepages.sh@169 -- # local node 00:03:24.106 19:58:21 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:24.106 19:58:21 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:24.106 19:58:21 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:24.106 19:58:21 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:24.106 19:58:21 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:24.106 19:58:21 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:24.106 19:58:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.106 19:58:21 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:24.106 19:58:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:24.106 19:58:21 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.106 19:58:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.106 19:58:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:24.106 19:58:21 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.106 19:58:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.106 19:58:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.106 19:58:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.106 19:58:21 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:24.106 19:58:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.106 19:58:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:24.106 19:58:21 -- setup/hugepages.sh@83 -- # : 256 00:03:24.106 19:58:21 -- setup/hugepages.sh@84 -- # : 1 00:03:24.106 19:58:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.106 19:58:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:24.106 19:58:21 -- setup/hugepages.sh@83 -- # : 0 00:03:24.106 19:58:21 -- setup/hugepages.sh@84 -- # : 0 00:03:24.106 19:58:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.106 19:58:21 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:24.106 19:58:21 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:24.106 19:58:21 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:24.106 19:58:21 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:24.106 19:58:21 -- 
setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:24.106 19:58:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.106 19:58:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:24.106 19:58:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:24.106 19:58:21 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.106 19:58:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.106 19:58:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.106 19:58:21 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.106 19:58:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.106 19:58:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.106 19:58:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.106 19:58:21 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:24.106 19:58:21 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:24.106 19:58:21 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:24.106 19:58:21 -- setup/hugepages.sh@78 -- # return 0 00:03:24.106 19:58:21 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:24.106 19:58:21 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:24.106 19:58:21 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:24.106 19:58:21 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:24.106 19:58:21 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:24.106 19:58:21 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:24.106 19:58:21 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:24.106 19:58:21 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:24.106 19:58:21 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.106 19:58:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.106 19:58:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.106 19:58:21 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.106 19:58:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.106 19:58:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.106 19:58:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.106 19:58:21 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:24.106 19:58:21 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:24.106 19:58:21 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:24.106 19:58:21 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:24.106 19:58:21 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:24.106 19:58:21 -- setup/hugepages.sh@78 -- # return 0 00:03:24.106 19:58:21 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:24.106 19:58:21 -- setup/hugepages.sh@187 -- # setup output 00:03:24.106 19:58:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.106 19:58:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:26.653 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:26.653 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:03:26.653 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:26.653 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:26.653 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:26.653 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:26.653 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 
00:03:26.653 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:26.653 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:26.653 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:26.653 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:26.653 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:26.653 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:26.653 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:26.653 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:26.653 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:26.653 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:26.653 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:03:26.920 19:58:24 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:26.920 19:58:24 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:26.920 19:58:24 -- setup/hugepages.sh@89 -- # local node 00:03:26.920 19:58:24 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.920 19:58:24 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.920 19:58:24 -- setup/hugepages.sh@92 -- # local surp 00:03:26.920 19:58:24 -- setup/hugepages.sh@93 -- # local resv 00:03:26.920 19:58:24 -- setup/hugepages.sh@94 -- # local anon 00:03:26.920 19:58:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.920 19:58:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.920 19:58:24 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.920 19:58:24 -- setup/common.sh@18 -- # local node= 00:03:26.920 19:58:24 -- setup/common.sh@19 -- # local var val 00:03:26.920 19:58:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.920 19:58:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.920 19:58:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.920 19:58:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.920 19:58:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.920 19:58:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.920 19:58:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 108712684 kB' 'MemAvailable: 112447908 kB' 'Buffers: 2696 kB' 'Cached: 10716160 kB' 'SwapCached: 0 kB' 'Active: 7860844 kB' 'Inactive: 3438960 kB' 'Active(anon): 6882672 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590356 kB' 'Mapped: 178428 kB' 'Shmem: 6301724 kB' 'KReclaimable: 285448 kB' 'Slab: 923056 kB' 'SReclaimable: 285448 kB' 'SUnreclaim: 637608 kB' 'KernelStack: 24480 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034716 kB' 'Committed_AS: 8422440 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228340 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3141696 kB' 
'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:26.920 19:58:24 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.920 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.920 19:58:24 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.920 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.920 19:58:24 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.920 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.920 19:58:24 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.920 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.920 19:58:24 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.920 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.920 19:58:24 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.920 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.920 19:58:24 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.920 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.920 19:58:24 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.920 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.920 19:58:24 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.920 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.920 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 
00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 
19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.921 19:58:24 -- setup/common.sh@33 -- # echo 0 00:03:26.921 19:58:24 -- setup/common.sh@33 -- # return 0 00:03:26.921 19:58:24 -- setup/hugepages.sh@97 -- # anon=0 00:03:26.921 19:58:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.921 19:58:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.921 19:58:24 -- setup/common.sh@18 -- # local node= 00:03:26.921 19:58:24 -- setup/common.sh@19 -- # local var val 00:03:26.921 19:58:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.921 19:58:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.921 19:58:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.921 19:58:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.921 19:58:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.921 19:58:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 108714700 kB' 'MemAvailable: 112449924 kB' 'Buffers: 2696 kB' 'Cached: 10716160 kB' 'SwapCached: 0 kB' 'Active: 7860724 kB' 'Inactive: 3438960 kB' 'Active(anon): 6882552 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590244 kB' 'Mapped: 178856 kB' 'Shmem: 6301724 kB' 'KReclaimable: 285448 kB' 'Slab: 923056 kB' 'SReclaimable: 285448 kB' 'SUnreclaim: 637608 kB' 'KernelStack: 24496 kB' 'PageTables: 8716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034716 kB' 'Committed_AS: 8422452 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228324 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.921 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.921 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 
19:58:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.922 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.922 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.923 19:58:24 -- setup/common.sh@33 -- # echo 0 00:03:26.923 19:58:24 -- setup/common.sh@33 -- # return 0 00:03:26.923 19:58:24 -- setup/hugepages.sh@99 -- # surp=0 00:03:26.923 19:58:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.923 19:58:24 -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:26.923 19:58:24 -- setup/common.sh@18 -- # local node= 00:03:26.923 19:58:24 -- setup/common.sh@19 -- # local var val 00:03:26.923 19:58:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.923 19:58:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.923 19:58:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.923 19:58:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.923 19:58:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.923 19:58:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 108715320 kB' 'MemAvailable: 112450544 kB' 'Buffers: 2696 kB' 'Cached: 10716160 kB' 'SwapCached: 0 kB' 'Active: 7854712 kB' 'Inactive: 3438960 kB' 'Active(anon): 6876540 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584132 kB' 'Mapped: 177920 kB' 'Shmem: 6301724 kB' 'KReclaimable: 285448 kB' 'Slab: 923080 kB' 'SReclaimable: 285448 kB' 'SUnreclaim: 637632 kB' 'KernelStack: 24512 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034716 kB' 'Committed_AS: 8414864 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228336 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 
19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 
19:58:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.923 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.923 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.924 19:58:24 -- setup/common.sh@33 -- # echo 0 00:03:26.924 19:58:24 -- setup/common.sh@33 -- # return 0 00:03:26.924 19:58:24 -- setup/hugepages.sh@100 -- # resv=0 00:03:26.924 19:58:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:26.924 nr_hugepages=1536 00:03:26.924 19:58:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.924 resv_hugepages=0 00:03:26.924 19:58:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.924 surplus_hugepages=0 00:03:26.924 19:58:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.924 anon_hugepages=0 00:03:26.924 19:58:24 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:26.924 19:58:24 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:26.924 19:58:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.924 19:58:24 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.924 19:58:24 -- setup/common.sh@18 -- # local node= 00:03:26.924 19:58:24 -- setup/common.sh@19 -- # local var val 00:03:26.924 19:58:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.924 19:58:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.924 19:58:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.924 19:58:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.924 19:58:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.924 19:58:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 108715908 kB' 'MemAvailable: 112451132 kB' 'Buffers: 2696 kB' 'Cached: 10716212 kB' 'SwapCached: 0 kB' 'Active: 7854176 kB' 'Inactive: 3438960 kB' 'Active(anon): 6876004 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583468 kB' 'Mapped: 177928 kB' 'Shmem: 6301776 kB' 'KReclaimable: 285448 kB' 'Slab: 923080 kB' 'SReclaimable: 285448 kB' 'SUnreclaim: 637632 kB' 'KernelStack: 24448 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034716 kB' 'Committed_AS: 8414880 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228320 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.924 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.924 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 
19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.925 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.925 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.926 19:58:24 -- setup/common.sh@33 -- # echo 1536 00:03:26.926 19:58:24 -- setup/common.sh@33 -- # return 0 00:03:26.926 19:58:24 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:26.926 19:58:24 -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.926 19:58:24 -- setup/hugepages.sh@27 -- # local node 00:03:26.926 19:58:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.926 19:58:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.926 19:58:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.926 19:58:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:26.926 19:58:24 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.926 19:58:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.926 19:58:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.926 19:58:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.926 19:58:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.926 19:58:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.926 19:58:24 -- setup/common.sh@18 -- # local node=0 00:03:26.926 19:58:24 -- setup/common.sh@19 -- # local var val 00:03:26.926 19:58:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.926 19:58:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.926 19:58:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.926 19:58:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.926 19:58:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.926 19:58:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 61348144 kB' 'MemUsed: 4407836 kB' 'SwapCached: 0 kB' 'Active: 1787052 kB' 'Inactive: 84312 kB' 'Active(anon): 1427196 kB' 'Inactive(anon): 0 kB' 'Active(file): 359856 kB' 'Inactive(file): 84312 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1826420 kB' 'Mapped: 32920 kB' 'AnonPages: 54000 kB' 'Shmem: 1382252 kB' 'KernelStack: 10504 kB' 'PageTables: 2568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121012 kB' 'Slab: 441160 kB' 'SReclaimable: 121012 kB' 'SUnreclaim: 320148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.926 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.926 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 
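[annotation] For readers skimming the xtrace above: the repeated "IFS=': '" / "read -r var val _" / "continue" lines are setup/common.sh's get_meminfo walking every key of a meminfo file until it reaches the one it was asked for (HugePages_Surp here, scoped to node 0). A minimal sketch of that lookup, reconstructed from the trace rather than copied from the SPDK source (variable names follow the trace; the loop body is simplified):

    # Sketch only: not the real setup/common.sh implementation.
    get_meminfo() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        # Per-node queries read the node's own meminfo when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node "$node" }              # node files prefix each key with "Node <n> "
            IFS=': ' read -r var val _ <<< "$line"  # split "Key:   <value> kB"
            if [[ $var == "$get" ]]; then           # every non-matching key is a "continue" above
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    # e.g. surp=$(get_meminfo HugePages_Surp 0)   # -> 0 in the run logged here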
00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@33 -- # echo 0 00:03:26.927 19:58:24 -- setup/common.sh@33 -- # return 0 00:03:26.927 19:58:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.927 19:58:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.927 19:58:24 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:03:26.927 19:58:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:26.927 19:58:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.927 19:58:24 -- setup/common.sh@18 -- # local node=1 00:03:26.927 19:58:24 -- setup/common.sh@19 -- # local var val 00:03:26.927 19:58:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.927 19:58:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.927 19:58:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:26.927 19:58:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:26.927 19:58:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.927 19:58:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60681976 kB' 'MemFree: 47367764 kB' 'MemUsed: 13314212 kB' 'SwapCached: 0 kB' 'Active: 6067520 kB' 'Inactive: 3354648 kB' 'Active(anon): 5449204 kB' 'Inactive(anon): 0 kB' 'Active(file): 618316 kB' 'Inactive(file): 3354648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8892504 kB' 'Mapped: 145008 kB' 'AnonPages: 529844 kB' 'Shmem: 4919540 kB' 'KernelStack: 13960 kB' 'PageTables: 5916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164436 kB' 'Slab: 481920 kB' 'SReclaimable: 164436 kB' 'SUnreclaim: 317484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.927 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.927 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 
00:03:26.928 19:58:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # continue 
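[annotation] The hugepages.sh@110-@128 lines interleaved above carry the per-node bookkeeping this custom_alloc test is checking: the run's expected split (512 pages on node0, 1024 on node1) is compared against what the kernel reports for each node, after folding in reserved and surplus pages. A hedged, self-contained recap of that logic (expected counts hard-coded from this run; the real script derives them, and relies on the get_meminfo sketch above):

    # Sketch only; numbers taken from this trace, not a general implementation.
    nodes_test=(512 1024)   # per-node page counts in this run (node0, node1)
    nodes_sys=()
    resv=0                  # HugePages_Rsvd reported system-wide in this run

    # What each NUMA node reports as allocated.
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        nodes_sys[n]=$(get_meminfo HugePages_Total "$n" || echo 0)
    done

    # Fold reserved plus per-node surplus pages into the expectation
    # (both are 0 in this trace, hence the "+= 0" lines above).
    for n in "${!nodes_test[@]}"; do
        (( nodes_test[n] += resv ))
        (( nodes_test[n] += $(get_meminfo HugePages_Surp "$n" || echo 0) ))
    done

    for n in "${!nodes_test[@]}"; do
        echo "node$n=${nodes_test[n]} expecting ${nodes_sys[n]}"
    done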
00:03:26.928 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # continue 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.928 19:58:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.928 19:58:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.928 19:58:24 -- setup/common.sh@33 -- # echo 0 00:03:26.928 19:58:24 -- setup/common.sh@33 -- # return 0 00:03:26.928 19:58:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.928 19:58:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.928 19:58:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.928 19:58:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.928 19:58:24 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:26.928 node0=512 expecting 512 00:03:26.928 19:58:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.928 19:58:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.928 19:58:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.928 19:58:24 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:26.928 node1=1024 expecting 1024 00:03:26.928 19:58:24 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:26.928 00:03:26.928 real 0m2.909s 00:03:26.928 user 0m0.982s 00:03:26.928 sys 0m1.690s 00:03:26.928 19:58:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.928 19:58:24 -- common/autotest_common.sh@10 -- # set +x 00:03:26.928 ************************************ 00:03:26.928 END TEST custom_alloc 00:03:26.928 ************************************ 00:03:26.928 19:58:24 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:26.928 19:58:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:26.928 19:58:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:26.928 19:58:24 -- common/autotest_common.sh@10 -- # set +x 00:03:26.928 ************************************ 00:03:26.928 START TEST no_shrink_alloc 00:03:26.928 ************************************ 00:03:26.928 19:58:24 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:26.928 19:58:24 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:26.928 19:58:24 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:26.928 19:58:24 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:26.928 19:58:24 -- setup/hugepages.sh@51 
-- # shift 00:03:26.928 19:58:24 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:26.928 19:58:24 -- setup/hugepages.sh@52 -- # local node_ids 00:03:26.928 19:58:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.928 19:58:24 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:26.928 19:58:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:26.928 19:58:24 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:26.928 19:58:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.928 19:58:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:26.928 19:58:24 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.928 19:58:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.928 19:58:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.928 19:58:24 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:26.928 19:58:24 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:26.928 19:58:24 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:26.928 19:58:24 -- setup/hugepages.sh@73 -- # return 0 00:03:26.928 19:58:24 -- setup/hugepages.sh@198 -- # setup output 00:03:26.928 19:58:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.928 19:58:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:30.232 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:30.232 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:03:30.232 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:30.232 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:30.232 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:30.232 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:30.232 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:30.232 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:30.232 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:30.232 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:30.232 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:30.232 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:30.232 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:30.232 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:30.232 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:30.232 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:30.232 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:30.232 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:03:30.232 19:58:27 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:30.232 19:58:27 -- setup/hugepages.sh@89 -- # local node 00:03:30.232 19:58:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.232 19:58:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.232 19:58:27 -- setup/hugepages.sh@92 -- # local surp 00:03:30.232 19:58:27 -- setup/hugepages.sh@93 -- # local resv 00:03:30.232 19:58:27 -- setup/hugepages.sh@94 -- # local anon 00:03:30.232 19:58:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.232 19:58:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.232 19:58:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.232 19:58:27 -- setup/common.sh@18 -- # local node= 00:03:30.232 19:58:27 -- setup/common.sh@19 -- # local var val 00:03:30.232 19:58:27 -- setup/common.sh@20 -- # local 
mem_f mem 00:03:30.232 19:58:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.232 19:58:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.232 19:58:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.232 19:58:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.232 19:58:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.232 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.232 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109783772 kB' 'MemAvailable: 113518980 kB' 'Buffers: 2696 kB' 'Cached: 10716304 kB' 'SwapCached: 0 kB' 'Active: 7856052 kB' 'Inactive: 3438960 kB' 'Active(anon): 6877880 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585180 kB' 'Mapped: 177944 kB' 'Shmem: 6301868 kB' 'KReclaimable: 285416 kB' 'Slab: 923256 kB' 'SReclaimable: 285416 kB' 'SUnreclaim: 637840 kB' 'KernelStack: 24480 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8415500 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228448 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 
-- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 
-- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.233 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.233 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.234 19:58:27 -- setup/common.sh@33 -- # echo 0 00:03:30.234 19:58:27 -- setup/common.sh@33 -- # return 0 00:03:30.234 19:58:27 -- setup/hugepages.sh@97 -- # anon=0 00:03:30.234 19:58:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.234 19:58:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.234 19:58:27 -- setup/common.sh@18 -- # local node= 00:03:30.234 19:58:27 -- setup/common.sh@19 -- # local var val 00:03:30.234 19:58:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.234 19:58:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.234 19:58:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.234 19:58:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.234 19:58:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.234 19:58:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109783744 kB' 
'MemAvailable: 113518952 kB' 'Buffers: 2696 kB' 'Cached: 10716304 kB' 'SwapCached: 0 kB' 'Active: 7856304 kB' 'Inactive: 3438960 kB' 'Active(anon): 6878132 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585444 kB' 'Mapped: 177944 kB' 'Shmem: 6301868 kB' 'KReclaimable: 285416 kB' 'Slab: 923256 kB' 'SReclaimable: 285416 kB' 'SUnreclaim: 637840 kB' 'KernelStack: 24480 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8415512 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228416 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.234 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.234 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 
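[annotation] The verify_nr_hugepages pass for the newly started no_shrink_alloc test (hugepages.sh@89-@99 above) first decides whether anonymous transparent hugepages should be counted at all: it reads the THP "enabled" knob ("always [madvise] never" on this box) and only queries AnonHugePages when THP is not forced to never. A small hedged recap of that probe, using the get_meminfo sketch from earlier:

    # Sketch only; mirrors the @96/@97 check visible in the trace.
    anon=0
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # system-wide, from /proc/meminfo; 0 in this run
    fi
    echo "anon=$anon"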
00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ 
CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.235 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.235 19:58:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.235 19:58:27 -- setup/common.sh@33 -- # echo 0 00:03:30.235 19:58:27 -- setup/common.sh@33 -- # return 0 00:03:30.235 19:58:27 -- setup/hugepages.sh@99 -- # surp=0 00:03:30.235 19:58:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.235 19:58:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.235 19:58:27 -- setup/common.sh@18 -- # local node= 00:03:30.235 19:58:27 -- setup/common.sh@19 -- # local var val 00:03:30.236 19:58:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.236 19:58:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.236 19:58:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.236 19:58:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.236 19:58:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.236 19:58:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109785052 kB' 'MemAvailable: 113520260 kB' 'Buffers: 2696 kB' 'Cached: 10716316 kB' 'SwapCached: 0 kB' 'Active: 7855708 kB' 'Inactive: 3438960 kB' 'Active(anon): 6877536 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584796 kB' 'Mapped: 177940 kB' 'Shmem: 6301880 kB' 'KReclaimable: 285416 kB' 'Slab: 923280 kB' 'SReclaimable: 285416 kB' 'SUnreclaim: 637864 kB' 'KernelStack: 24544 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8415524 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228432 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 
-- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.236 19:58:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.236 19:58:27 -- 
setup/common.sh@32 -- # continue 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.236 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 
19:58:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.237 
19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.237 19:58:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.237 19:58:27 -- setup/common.sh@33 -- # echo 0 00:03:30.237 19:58:27 -- setup/common.sh@33 -- # return 0 00:03:30.237 19:58:27 -- setup/hugepages.sh@100 -- # resv=0 00:03:30.237 19:58:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.237 nr_hugepages=1024 00:03:30.237 19:58:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.237 resv_hugepages=0 00:03:30.237 19:58:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.237 surplus_hugepages=0 00:03:30.237 19:58:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.237 anon_hugepages=0 00:03:30.237 19:58:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.237 19:58:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.237 19:58:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.237 19:58:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.237 19:58:27 -- setup/common.sh@18 -- # local node= 00:03:30.237 19:58:27 -- setup/common.sh@19 -- # local var val 00:03:30.237 19:58:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.237 19:58:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.237 19:58:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.237 19:58:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.237 19:58:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.237 19:58:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.237 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109787068 kB' 'MemAvailable: 113522276 kB' 'Buffers: 2696 kB' 'Cached: 10716320 kB' 'SwapCached: 0 kB' 'Active: 7855064 kB' 'Inactive: 3438960 kB' 'Active(anon): 6876892 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584148 kB' 'Mapped: 177940 kB' 'Shmem: 6301884 kB' 'KReclaimable: 285416 kB' 'Slab: 923280 kB' 'SReclaimable: 285416 kB' 'SUnreclaim: 637864 kB' 'KernelStack: 24512 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8415540 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228432 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB' 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
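By this point the trace has read HugePages_Surp and HugePages_Rsvd back as 0 (surp=0, resv=0), echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and moved on to reading HugePages_Total to confirm the pool really holds the requested 1024 pages. The arithmetic being verified amounts to the sketch below, where "expected" is a hypothetical stand-in for the requested count:

#!/usr/bin/env bash
# Illustrative sketch only -- mirrors the checks visible in the trace.
expected=1024

surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
nr_hugepages=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"

# The pool is considered settled when the configured total both accounts for
# any surplus/reserved pages and matches the requested count exactly.
(( expected == nr_hugepages + surp + resv )) && (( expected == nr_hugepages )) \
    && echo "hugepage pool matches the requested $expected pages"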
00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 
-- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.238 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.238 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 
00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.239 19:58:27 -- setup/common.sh@33 -- # echo 1024 00:03:30.239 19:58:27 -- setup/common.sh@33 -- # return 0 00:03:30.239 19:58:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.239 19:58:27 -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.239 19:58:27 -- setup/hugepages.sh@27 -- # local node 00:03:30.239 19:58:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.239 19:58:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.239 19:58:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.239 19:58:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:30.239 19:58:27 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.239 19:58:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.239 19:58:27 -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:03:30.239 19:58:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.239 19:58:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.239 19:58:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.239 19:58:27 -- setup/common.sh@18 -- # local node=0 00:03:30.239 19:58:27 -- setup/common.sh@19 -- # local var val 00:03:30.239 19:58:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.239 19:58:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.239 19:58:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.239 19:58:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.239 19:58:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.239 19:58:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 60319364 kB' 'MemUsed: 5436616 kB' 'SwapCached: 0 kB' 'Active: 1787780 kB' 'Inactive: 84312 kB' 'Active(anon): 1427924 kB' 'Inactive(anon): 0 kB' 'Active(file): 359856 kB' 'Inactive(file): 84312 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1826476 kB' 'Mapped: 32920 kB' 'AnonPages: 54668 kB' 'Shmem: 1382308 kB' 'KernelStack: 10552 kB' 'PageTables: 2672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121012 kB' 'Slab: 441080 kB' 'SReclaimable: 121012 kB' 'SUnreclaim: 320068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.239 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.239 19:58:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 
00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
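The lookup in progress here is the per-node variant: with node=0, get_meminfo switches mem_f to /sys/devices/system/node/node0/meminfo and strips the leading "Node <n> " prefix (the mem=("${mem[@]#Node +([0-9]) }") step in the trace) before splitting fields on ': '. An equivalent stand-alone query might look like the sketch below; the function name is hypothetical:

#!/usr/bin/env bash
# Illustrative sketch only: per-node counterpart of the /proc/meminfo lookup.
node_meminfo_value() {
    local node=$1 key=$2 var val _
    # Per-node meminfo lines carry a "Node <n> " prefix; drop it first so the
    # remaining "Key: value" text splits the same way /proc/meminfo does.
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] || continue
        echo "$val"
        return 0
    done < <(sed "s/^Node ${node} //" "/sys/devices/system/node/node${node}/meminfo")
    return 1
}

node_meminfo_value 0 HugePages_Total   # 1024 in this run
node_meminfo_value 0 HugePages_Surp    # 0 in this run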
00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # continue 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.240 19:58:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.240 19:58:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.240 19:58:27 -- setup/common.sh@33 -- # echo 0 00:03:30.240 19:58:27 -- setup/common.sh@33 -- # return 0 00:03:30.240 19:58:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.240 19:58:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.240 19:58:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.240 19:58:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.240 19:58:27 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:30.240 node0=1024 expecting 1024 00:03:30.241 19:58:27 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:30.241 19:58:27 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:30.241 19:58:27 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:30.241 19:58:27 -- setup/hugepages.sh@202 -- # setup output 00:03:30.241 19:58:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.241 19:58:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:32.785 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:32.785 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:03:32.785 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:32.785 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:32.785 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:32.785 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:32.785 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:32.785 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:32.785 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:32.785 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:32.785 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:32.785 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:32.785 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:32.785 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:32.785 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:32.785 
0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:32.785 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:32.785 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver
00:03:33.049 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:33.049 19:58:30 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:33.049 19:58:30 -- setup/hugepages.sh@89 -- # local node
00:03:33.049 19:58:30 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:33.049 19:58:30 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:33.049 19:58:30 -- setup/hugepages.sh@92 -- # local surp
00:03:33.049 19:58:30 -- setup/hugepages.sh@93 -- # local resv
00:03:33.049 19:58:30 -- setup/hugepages.sh@94 -- # local anon
00:03:33.049 19:58:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:33.049 19:58:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:33.049 19:58:30 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:33.049 19:58:30 -- setup/common.sh@18 -- # local node=
00:03:33.049 19:58:30 -- setup/common.sh@19 -- # local var val
00:03:33.049 19:58:30 -- setup/common.sh@20 -- # local mem_f mem
00:03:33.049 19:58:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.049 19:58:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.049 19:58:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.049 19:58:30 -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.049 19:58:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.050 19:58:30 -- setup/common.sh@31 -- # IFS=': '
00:03:33.050 19:58:30 -- setup/common.sh@31 -- # read -r var val _
00:03:33.050 19:58:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109818620 kB' 'MemAvailable: 113553828 kB' 'Buffers: 2696 kB' 'Cached: 10716432 kB' 'SwapCached: 0 kB' 'Active: 7855980 kB' 'Inactive: 3438960 kB' 'Active(anon): 6877808 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585068 kB' 'Mapped: 177948 kB' 'Shmem: 6301996 kB' 'KReclaimable: 285416 kB' 'Slab: 923740 kB' 'SReclaimable: 285416 kB' 'SUnreclaim: 638324 kB' 'KernelStack: 24512 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8416280 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228496 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB'
00:03:33.051 19:58:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.051 19:58:30 -- setup/common.sh@33 -- # echo 0
00:03:33.051 19:58:30 -- setup/common.sh@33 -- # return 0
00:03:33.051 19:58:30 -- setup/hugepages.sh@97 -- # anon=0
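Every get_meminfo call in the trace above does the same thing: read /proc/meminfo (or a per-node meminfo file), strip any "Node <id>" prefix, then scan the "Key: value" pairs until the requested key matches and echo its value. A minimal stand-alone sketch of such a lookup, reconstructed from the xtrace rather than copied from setup/common.sh (the real helper may differ in details), assuming bash with extglob:

#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup as suggested by the xtrace above.
# Not the actual setup/common.sh implementation; names are illustrative.
shopt -s extglob

get_meminfo() {
	local get=$1 node=${2:-}
	local var val _
	local mem_f=/proc/meminfo
	local -a mem

	# With a node id, read that node's meminfo instead of the global file.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node meminfo prefixes each line with "Node <id> "; strip it.
	mem=("${mem[@]#Node +([0-9]) }")

	# Walk the "Key: value [kB]" pairs; print the value of the requested key.
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done < <(printf '%s\n' "${mem[@]}")

	return 1
}

Called as get_meminfo AnonHugePages or get_meminfo HugePages_Surp 0, which matches the two shapes of call visible in this log.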
00:03:33.051 19:58:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:33.051 19:58:30 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.051 19:58:30 -- setup/common.sh@18 -- # local node=
00:03:33.051 19:58:30 -- setup/common.sh@19 -- # local var val
00:03:33.051 19:58:30 -- setup/common.sh@20 -- # local mem_f mem
00:03:33.051 19:58:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.051 19:58:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.051 19:58:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.051 19:58:30 -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.051 19:58:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.051 19:58:30 -- setup/common.sh@31 -- # IFS=': '
00:03:33.051 19:58:30 -- setup/common.sh@31 -- # read -r var val _
00:03:33.051 19:58:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109820384 kB' 'MemAvailable: 113555592 kB' 'Buffers: 2696 kB' 'Cached: 10716432 kB' 'SwapCached: 0 kB' 'Active: 7856228 kB' 'Inactive: 3438960 kB' 'Active(anon): 6878056 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585348 kB' 'Mapped: 177948 kB' 'Shmem: 6301996 kB' 'KReclaimable: 285416 kB' 'Slab: 923732 kB' 'SReclaimable: 285416 kB' 'SUnreclaim: 638316 kB' 'KernelStack: 24496 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8416292 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228480 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB'
00:03:33.052 19:58:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.052 19:58:30 -- setup/common.sh@33 -- # echo 0
00:03:33.052 19:58:30 -- setup/common.sh@33 -- # return 0
00:03:33.052 19:58:30 -- setup/hugepages.sh@99 -- # surp=0
00:03:33.052 19:58:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:33.052 19:58:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:33.052 19:58:30 -- setup/common.sh@18 -- # local node=
00:03:33.052 19:58:30 -- setup/common.sh@19 -- # local var val
00:03:33.052 19:58:30 -- setup/common.sh@20 -- # local mem_f mem
00:03:33.052 19:58:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.052 19:58:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.052 19:58:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.052 19:58:30 -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.052 19:58:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.052 19:58:30 -- setup/common.sh@31 -- # IFS=': '
00:03:33.052 19:58:30 -- setup/common.sh@31 -- # read -r var val _
00:03:33.052 19:58:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109820980 kB' 'MemAvailable: 113556188 kB' 'Buffers: 2696 kB' 'Cached: 10716444 kB' 'SwapCached: 0 kB' 'Active: 7856460 kB' 'Inactive: 3438960 kB' 'Active(anon): 6878288 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585580 kB' 'Mapped: 177948 kB' 'Shmem: 6302008 kB' 'KReclaimable: 285416 kB' 'Slab: 923756 kB' 'SReclaimable: 285416 kB' 'SUnreclaim: 638340 kB' 'KernelStack: 24496 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8416304 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228480 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB'
00:03:33.053 19:58:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:33.053 19:58:30 -- setup/common.sh@33 -- # echo 0
00:03:33.053 19:58:30 -- setup/common.sh@33 -- # return 0
00:03:33.053 19:58:30 -- setup/hugepages.sh@100 -- # resv=0
00:03:33.053 19:58:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:33.053 nr_hugepages=1024
00:03:33.053 19:58:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:33.053 resv_hugepages=0
00:03:33.053 19:58:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:33.053 surplus_hugepages=0
00:03:33.053 19:58:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:33.053 anon_hugepages=0
00:03:33.053 19:58:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:33.054 19:58:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
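The echoes and arithmetic checks above are the actual verification: the value written to nr_hugepages must be fully accounted for by what the kernel reports (here 1024 requested, 0 reserved, 0 surplus, 0 anonymous huge pages). A small sketch of that accounting check, reusing the illustrative get_meminfo helper sketched earlier (function and variable names are assumptions, not the exact setup/hugepages.sh code):

# Sketch of the global hugepage accounting check performed above.
# Assumes the illustrative get_meminfo helper from the earlier sketch.
check_hugepage_accounting() {
	local nr_hugepages surp resv total

	# What was requested from the kernel...
	nr_hugepages=$(< /proc/sys/vm/nr_hugepages)
	# ...and what the kernel currently reports in /proc/meminfo.
	surp=$(get_meminfo HugePages_Surp)
	resv=$(get_meminfo HugePages_Rsvd)
	total=$(get_meminfo HugePages_Total)

	echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"

	# Consistent when the reported total covers the request plus surplus
	# and reserved pages: 1024 == 1024 + 0 + 0 in the run above.
	(( total == nr_hugepages + surp + resv ))
}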
00:03:33.054 19:58:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:33.054 19:58:30 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:33.054 19:58:30 -- setup/common.sh@18 -- # local node=
00:03:33.054 19:58:30 -- setup/common.sh@19 -- # local var val
00:03:33.054 19:58:30 -- setup/common.sh@20 -- # local mem_f mem
00:03:33.054 19:58:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.054 19:58:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.054 19:58:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.054 19:58:30 -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.054 19:58:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.054 19:58:30 -- setup/common.sh@31 -- # IFS=': '
00:03:33.054 19:58:30 -- setup/common.sh@31 -- # read -r var val _
00:03:33.054 19:58:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437956 kB' 'MemFree: 109822492 kB' 'MemAvailable: 113557700 kB' 'Buffers: 2696 kB' 'Cached: 10716456 kB' 'SwapCached: 0 kB' 'Active: 7856248 kB' 'Inactive: 3438960 kB' 'Active(anon): 6878076 kB' 'Inactive(anon): 0 kB' 'Active(file): 978172 kB' 'Inactive(file): 3438960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585356 kB' 'Mapped: 177948 kB' 'Shmem: 6302020 kB' 'KReclaimable: 285416 kB' 'Slab: 923756 kB' 'SReclaimable: 285416 kB' 'SUnreclaim: 638340 kB' 'KernelStack: 24512 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559004 kB' 'Committed_AS: 8416320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 228480 kB' 'VmallocChunk: 0 kB' 'Percpu: 95744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3141696 kB' 'DirectMap2M: 17606656 kB' 'DirectMap1G: 115343360 kB'
00:03:33.055 19:58:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:33.055 19:58:30 -- setup/common.sh@33 -- # echo 1024
00:03:33.055 19:58:30 -- setup/common.sh@33 -- # return 0
00:03:33.055 19:58:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:33.055 19:58:30 -- setup/hugepages.sh@112 -- # get_nodes
00:03:33.055 19:58:30 -- setup/hugepages.sh@27 -- # local node
00:03:33.055 19:58:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.055 19:58:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:33.055 19:58:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.055 19:58:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:33.055 19:58:30 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:33.055 19:58:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:33.055 19:58:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:33.055 19:58:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.055 19:58:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.055 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.055 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.055 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.055 19:58:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.055 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.055 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.055 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.055 19:58:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.055 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.055 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.055 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.055 19:58:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.055 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.055 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.055 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.055 19:58:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.055 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.055 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.055 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.055 19:58:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- 
setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # continue 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.056 19:58:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.056 19:58:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.056 19:58:30 -- setup/common.sh@33 -- # echo 0 00:03:33.056 19:58:30 -- setup/common.sh@33 -- # return 0 00:03:33.056 19:58:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:33.056 19:58:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:33.056 19:58:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:33.056 19:58:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:33.056 19:58:30 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:33.056 node0=1024 expecting 1024 00:03:33.056 19:58:30 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:33.056 00:03:33.056 real 0m6.076s 00:03:33.056 user 0m1.937s 00:03:33.056 sys 0m3.687s 00:03:33.056 19:58:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.056 19:58:30 -- common/autotest_common.sh@10 -- # set +x 00:03:33.056 
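The pass above walks /sys/devices/system/node/node0/meminfo field by field until it reaches HugePages_Surp, adds the surplus into the per-node tally, and ends with "node0=1024 expecting 1024". A minimal stand-alone sketch of that lookup, plus the zeroing loop that clear_hp's "echo 0" steps perform next; the function names and the sudo tee write here are illustrative, not the SPDK helpers verbatim:

    # Read one key (e.g. HugePages_Total, HugePages_Surp) from /proc/meminfo,
    # or from the per-node file when a node number is given -- the same two
    # sources the trace above switches between.
    get_meminfo() {
        local key=$1 node=$2 file=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        awk -v k="$key" -F'[: ]+' \
            '{ for (i = 1; i < NF; i++) if ($i == k) { print $(i + 1); exit } }' "$file"
    }

    # Zero every hugepage pool on every node, mirroring the per-directory
    # "echo 0" loop that clear_hp runs during cleanup.
    clear_hugepages() {
        local hp
        for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
            echo 0 | sudo tee "$hp" > /dev/null
        done
    }

    get_meminfo HugePages_Total 0    # prints 1024 for the node traced above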
************************************ 00:03:33.056 END TEST no_shrink_alloc 00:03:33.056 ************************************ 00:03:33.056 19:58:30 -- setup/hugepages.sh@217 -- # clear_hp 00:03:33.056 19:58:30 -- setup/hugepages.sh@37 -- # local node hp 00:03:33.056 19:58:30 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:33.056 19:58:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.056 19:58:30 -- setup/hugepages.sh@41 -- # echo 0 00:03:33.056 19:58:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.056 19:58:30 -- setup/hugepages.sh@41 -- # echo 0 00:03:33.056 19:58:30 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:33.056 19:58:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.056 19:58:30 -- setup/hugepages.sh@41 -- # echo 0 00:03:33.056 19:58:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.056 19:58:30 -- setup/hugepages.sh@41 -- # echo 0 00:03:33.056 19:58:30 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:33.056 19:58:30 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:33.056 00:03:33.056 real 0m21.972s 00:03:33.056 user 0m6.955s 00:03:33.056 sys 0m12.444s 00:03:33.056 19:58:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.056 19:58:30 -- common/autotest_common.sh@10 -- # set +x 00:03:33.056 ************************************ 00:03:33.056 END TEST hugepages 00:03:33.056 ************************************ 00:03:33.317 19:58:30 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh 00:03:33.317 19:58:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:33.317 19:58:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:33.317 19:58:30 -- common/autotest_common.sh@10 -- # set +x 00:03:33.317 ************************************ 00:03:33.317 START TEST driver 00:03:33.317 ************************************ 00:03:33.318 19:58:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh 00:03:33.318 * Looking for test storage... 
00:03:33.318 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:03:33.318 19:58:31 -- setup/driver.sh@68 -- # setup reset 00:03:33.318 19:58:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:33.318 19:58:31 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.583 19:58:35 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:37.583 19:58:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:37.583 19:58:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:37.583 19:58:35 -- common/autotest_common.sh@10 -- # set +x 00:03:37.583 ************************************ 00:03:37.583 START TEST guess_driver 00:03:37.583 ************************************ 00:03:37.583 19:58:35 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:37.583 19:58:35 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:37.583 19:58:35 -- setup/driver.sh@47 -- # local fail=0 00:03:37.583 19:58:35 -- setup/driver.sh@49 -- # pick_driver 00:03:37.583 19:58:35 -- setup/driver.sh@36 -- # vfio 00:03:37.583 19:58:35 -- setup/driver.sh@21 -- # local iommu_grups 00:03:37.583 19:58:35 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:37.583 19:58:35 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:37.583 19:58:35 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:37.583 19:58:35 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:37.583 19:58:35 -- setup/driver.sh@29 -- # (( 335 > 0 )) 00:03:37.583 19:58:35 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:37.583 19:58:35 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:37.583 19:58:35 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:37.583 19:58:35 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:37.583 19:58:35 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:37.583 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:37.583 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:37.583 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:37.583 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:37.583 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:37.583 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:37.583 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:37.584 19:58:35 -- setup/driver.sh@30 -- # return 0 00:03:37.584 19:58:35 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:37.584 19:58:35 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:37.584 19:58:35 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:37.584 19:58:35 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:37.584 Looking for driver=vfio-pci 00:03:37.584 19:58:35 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.584 19:58:35 -- setup/driver.sh@45 -- # setup output config 00:03:37.584 19:58:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.584 19:58:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:40.913 19:58:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.913 19:58:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 
00:03:40.913 19:58:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.913 19:58:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.913 19:58:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.913 19:58:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.913 19:58:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.913 19:58:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.913 19:58:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.913 19:58:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.913 19:58:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.913 19:58:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.913 19:58:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.913 19:58:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.913 19:58:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.913 19:58:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.913 19:58:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.913 19:58:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.913 19:58:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.913 19:58:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.913 19:58:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.913 19:58:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.913 19:58:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.913 19:58:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.913 19:58:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.913 19:58:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.913 19:58:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.913 19:58:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.913 19:58:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.913 19:58:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.913 19:58:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.913 19:58:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.913 19:58:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.913 19:58:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.913 19:58:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.913 19:58:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.913 19:58:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.913 19:58:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.913 19:58:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.913 19:58:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.913 19:58:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.913 19:58:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.913 19:58:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.913 19:58:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.913 19:58:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.913 19:58:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.913 19:58:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.913 19:58:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.485 19:58:39 -- setup/driver.sh@58 -- # [[ -> == \-\> 
]] 00:03:41.485 19:58:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.485 19:58:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.485 19:58:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.485 19:58:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.485 19:58:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.746 19:58:39 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:41.746 19:58:39 -- setup/driver.sh@65 -- # setup reset 00:03:41.746 19:58:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.746 19:58:39 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.949 00:03:45.949 real 0m8.362s 00:03:45.949 user 0m2.014s 00:03:45.949 sys 0m3.930s 00:03:45.949 19:58:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.949 19:58:43 -- common/autotest_common.sh@10 -- # set +x 00:03:45.949 ************************************ 00:03:45.949 END TEST guess_driver 00:03:45.949 ************************************ 00:03:45.949 00:03:45.949 real 0m12.761s 00:03:45.949 user 0m3.108s 00:03:45.949 sys 0m6.031s 00:03:45.949 19:58:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.949 19:58:43 -- common/autotest_common.sh@10 -- # set +x 00:03:45.949 ************************************ 00:03:45.949 END TEST driver 00:03:45.949 ************************************ 00:03:45.949 19:58:43 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh 00:03:45.949 19:58:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:45.949 19:58:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:45.949 19:58:43 -- common/autotest_common.sh@10 -- # set +x 00:03:45.949 ************************************ 00:03:45.949 START TEST devices 00:03:45.950 ************************************ 00:03:45.950 19:58:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh 00:03:45.950 * Looking for test storage... 
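Before the device tests begin, guess_driver has already settled on vfio-pci: the trace records enable_unsafe_noiommu_mode present (N), 335 populated /sys/kernel/iommu_groups entries, and a modprobe --show-depends vfio_pci resolution ending in .ko modules. A condensed sketch of that decision under the same assumptions; the real driver.sh may order and combine the checks differently:

    # Pick the userspace PCI driver the way the traced guess_driver run did:
    # prefer vfio-pci when IOMMU groups exist and the module chain resolves.
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*)
        if [[ -e ${groups[0]} ]] && (( ${#groups[@]} > 0 )) \
           && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo 'No valid driver found'
        fi
    }

    driver=$(pick_driver)
    echo "Looking for driver=$driver"    # matches the "Looking for driver=vfio-pci" line above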
00:03:45.950 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:03:45.950 19:58:43 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:45.950 19:58:43 -- setup/devices.sh@192 -- # setup reset 00:03:45.950 19:58:43 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:45.950 19:58:43 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.250 19:58:46 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:49.250 19:58:46 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:49.250 19:58:46 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:49.250 19:58:46 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:49.250 19:58:46 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:49.250 19:58:46 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:49.250 19:58:46 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:49.250 19:58:46 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:49.250 19:58:46 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:49.250 19:58:46 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:49.250 19:58:46 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:03:49.250 19:58:46 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:03:49.250 19:58:46 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:49.250 19:58:46 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:49.250 19:58:46 -- setup/devices.sh@196 -- # blocks=() 00:03:49.250 19:58:46 -- setup/devices.sh@196 -- # declare -a blocks 00:03:49.250 19:58:46 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:49.250 19:58:46 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:49.250 19:58:46 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:49.250 19:58:46 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:49.250 19:58:46 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:49.250 19:58:46 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:49.250 19:58:46 -- setup/devices.sh@202 -- # pci=0000:c9:00.0 00:03:49.250 19:58:46 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]] 00:03:49.250 19:58:46 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:49.250 19:58:46 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:49.250 19:58:46 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:49.250 No valid GPT data, bailing 00:03:49.250 19:58:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:49.250 19:58:46 -- scripts/common.sh@393 -- # pt= 00:03:49.250 19:58:46 -- scripts/common.sh@394 -- # return 1 00:03:49.250 19:58:46 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:49.250 19:58:46 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:49.250 19:58:46 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:49.250 19:58:46 -- setup/common.sh@80 -- # echo 960197124096 00:03:49.250 19:58:46 -- setup/devices.sh@204 -- # (( 960197124096 >= min_disk_size )) 00:03:49.250 19:58:46 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:49.250 19:58:46 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:c9:00.0 00:03:49.250 19:58:46 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:49.250 19:58:46 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:49.250 19:58:46 -- 
setup/devices.sh@201 -- # ctrl=nvme1 00:03:49.250 19:58:46 -- setup/devices.sh@202 -- # pci=0000:03:00.0 00:03:49.250 19:58:46 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\3\:\0\0\.\0* ]] 00:03:49.250 19:58:46 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:49.250 19:58:46 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:03:49.250 19:58:46 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:03:49.250 No valid GPT data, bailing 00:03:49.250 19:58:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:49.250 19:58:46 -- scripts/common.sh@393 -- # pt= 00:03:49.250 19:58:46 -- scripts/common.sh@394 -- # return 1 00:03:49.250 19:58:46 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:49.250 19:58:46 -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:49.250 19:58:46 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:49.250 19:58:46 -- setup/common.sh@80 -- # echo 960197124096 00:03:49.250 19:58:46 -- setup/devices.sh@204 -- # (( 960197124096 >= min_disk_size )) 00:03:49.250 19:58:46 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:49.250 19:58:46 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:03:00.0 00:03:49.250 19:58:46 -- setup/devices.sh@209 -- # (( 2 > 0 )) 00:03:49.250 19:58:46 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:49.250 19:58:46 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:49.250 19:58:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:49.250 19:58:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:49.250 19:58:46 -- common/autotest_common.sh@10 -- # set +x 00:03:49.250 ************************************ 00:03:49.250 START TEST nvme_mount 00:03:49.250 ************************************ 00:03:49.251 19:58:46 -- common/autotest_common.sh@1104 -- # nvme_mount 00:03:49.251 19:58:46 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:49.251 19:58:46 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:49.251 19:58:46 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.251 19:58:46 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:49.251 19:58:46 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:49.251 19:58:46 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:49.251 19:58:46 -- setup/common.sh@40 -- # local part_no=1 00:03:49.251 19:58:46 -- setup/common.sh@41 -- # local size=1073741824 00:03:49.251 19:58:46 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:49.251 19:58:46 -- setup/common.sh@44 -- # parts=() 00:03:49.251 19:58:46 -- setup/common.sh@44 -- # local parts 00:03:49.251 19:58:46 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:49.251 19:58:46 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.251 19:58:46 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:49.251 19:58:46 -- setup/common.sh@46 -- # (( part++ )) 00:03:49.251 19:58:46 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.251 19:58:46 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:49.251 19:58:46 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:49.251 19:58:46 -- setup/common.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:50.193 Creating new GPT entries in memory. 00:03:50.193 GPT data structures destroyed! 
You may now partition the disk using fdisk or 00:03:50.193 other utilities. 00:03:50.193 19:58:47 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:50.193 19:58:47 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:50.193 19:58:47 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:50.193 19:58:47 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:50.193 19:58:47 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:51.133 Creating new GPT entries in memory. 00:03:51.133 The operation has completed successfully. 00:03:51.133 19:58:49 -- setup/common.sh@57 -- # (( part++ )) 00:03:51.133 19:58:49 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.133 19:58:49 -- setup/common.sh@62 -- # wait 1297389 00:03:51.133 19:58:49 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.133 19:58:49 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:51.133 19:58:49 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.133 19:58:49 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:51.133 19:58:49 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:51.394 19:58:49 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.394 19:58:49 -- setup/devices.sh@105 -- # verify 0000:c9:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:51.394 19:58:49 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:51.394 19:58:49 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:51.394 19:58:49 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.394 19:58:49 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:51.394 19:58:49 -- setup/devices.sh@53 -- # local found=0 00:03:51.394 19:58:49 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:51.394 19:58:49 -- setup/devices.sh@56 -- # : 00:03:51.394 19:58:49 -- setup/devices.sh@59 -- # local pci status 00:03:51.394 19:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.394 19:58:49 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:51.394 19:58:49 -- setup/devices.sh@47 -- # setup output config 00:03:51.394 19:58:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.394 19:58:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:53.306 19:58:51 -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:53.306 19:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.566 19:58:51 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:53.566 19:58:51 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:53.566 19:58:51 -- setup/devices.sh@63 -- # found=1 00:03:53.566 19:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.566 19:58:51 -- setup/devices.sh@62 -- # [[ 
0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:53.566 19:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.566 19:58:51 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:53.566 19:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.566 19:58:51 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:53.566 19:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.566 19:58:51 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:53.566 19:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.566 19:58:51 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:53.566 19:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.566 19:58:51 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:53.566 19:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.566 19:58:51 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:53.566 19:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.566 19:58:51 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:53.566 19:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.566 19:58:51 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:53.566 19:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.566 19:58:51 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:53.566 19:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.566 19:58:51 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:53.566 19:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.566 19:58:51 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:53.566 19:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.566 19:58:51 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:53.566 19:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.566 19:58:51 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:53.566 19:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.566 19:58:51 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:53.566 19:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.566 19:58:51 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:53.566 19:58:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.825 19:58:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:53.825 19:58:51 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:53.825 19:58:51 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.825 19:58:51 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.825 19:58:51 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.825 19:58:51 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:53.825 19:58:51 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.825 19:58:51 -- setup/devices.sh@21 -- # umount 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.825 19:58:51 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:53.825 19:58:51 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:53.825 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:53.825 19:58:51 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:53.825 19:58:51 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:54.086 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:54.086 /dev/nvme0n1: 8 bytes were erased at offset 0xdf90355e00 (gpt): 45 46 49 20 50 41 52 54 00:03:54.086 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:54.086 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:54.086 19:58:51 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:54.086 19:58:51 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:54.086 19:58:51 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.086 19:58:51 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:54.086 19:58:51 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:54.086 19:58:52 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.347 19:58:52 -- setup/devices.sh@116 -- # verify 0000:c9:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.347 19:58:52 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:54.347 19:58:52 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:54.347 19:58:52 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.347 19:58:52 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.347 19:58:52 -- setup/devices.sh@53 -- # local found=0 00:03:54.347 19:58:52 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.347 19:58:52 -- setup/devices.sh@56 -- # : 00:03:54.347 19:58:52 -- setup/devices.sh@59 -- # local pci status 00:03:54.347 19:58:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.347 19:58:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:54.347 19:58:52 -- setup/devices.sh@47 -- # setup output config 00:03:54.347 19:58:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.347 19:58:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:56.259 19:58:54 -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:56.259 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.519 19:58:54 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:56.519 19:58:54 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:56.519 19:58:54 -- setup/devices.sh@63 -- # found=1 00:03:56.519 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.519 19:58:54 -- setup/devices.sh@62 -- # [[ 
0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:56.519 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.519 19:58:54 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:56.519 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.519 19:58:54 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:56.519 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.519 19:58:54 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:56.519 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.519 19:58:54 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:56.519 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.519 19:58:54 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:56.519 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.519 19:58:54 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:56.519 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.519 19:58:54 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:56.519 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.519 19:58:54 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:56.519 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.519 19:58:54 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:56.519 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.519 19:58:54 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:56.519 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.519 19:58:54 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:56.519 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.519 19:58:54 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:56.519 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.520 19:58:54 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:56.520 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.520 19:58:54 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:56.520 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.520 19:58:54 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:56.520 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.780 19:58:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.780 19:58:54 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:56.780 19:58:54 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.780 19:58:54 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.780 19:58:54 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.780 19:58:54 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.780 19:58:54 -- setup/devices.sh@125 -- # verify 0000:c9:00.0 data@nvme0n1 '' '' 00:03:56.780 19:58:54 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 
00:03:56.780 19:58:54 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:56.780 19:58:54 -- setup/devices.sh@50 -- # local mount_point= 00:03:56.780 19:58:54 -- setup/devices.sh@51 -- # local test_file= 00:03:56.780 19:58:54 -- setup/devices.sh@53 -- # local found=0 00:03:56.780 19:58:54 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:56.780 19:58:54 -- setup/devices.sh@59 -- # local pci status 00:03:56.780 19:58:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.780 19:58:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:56.780 19:58:54 -- setup/devices.sh@47 -- # setup output config 00:03:56.780 19:58:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.780 19:58:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:59.323 19:58:56 -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:59.323 19:58:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.323 19:58:57 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:59.323 19:58:57 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:59.323 19:58:57 -- setup/devices.sh@63 -- # found=1 00:03:59.323 19:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.323 19:58:57 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:59.323 19:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.323 19:58:57 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:59.323 19:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.323 19:58:57 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:59.323 19:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.323 19:58:57 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:59.323 19:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.323 19:58:57 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:59.323 19:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.323 19:58:57 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:59.323 19:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.324 19:58:57 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:59.324 19:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.324 19:58:57 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:59.324 19:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.324 19:58:57 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:59.324 19:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.324 19:58:57 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:59.324 19:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.324 19:58:57 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:59.324 19:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.324 19:58:57 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:59.324 19:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.324 19:58:57 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:59.324 19:58:57 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.324 19:58:57 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:59.324 19:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.324 19:58:57 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:59.324 19:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.324 19:58:57 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:59.324 19:58:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.583 19:58:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:59.583 19:58:57 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:59.583 19:58:57 -- setup/devices.sh@68 -- # return 0 00:03:59.583 19:58:57 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:59.583 19:58:57 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.583 19:58:57 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.583 19:58:57 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:59.583 19:58:57 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:59.583 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:59.583 00:03:59.583 real 0m10.524s 00:03:59.583 user 0m2.687s 00:03:59.583 sys 0m4.908s 00:03:59.583 19:58:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.583 19:58:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.583 ************************************ 00:03:59.583 END TEST nvme_mount 00:03:59.583 ************************************ 00:03:59.843 19:58:57 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:59.843 19:58:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:59.843 19:58:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:59.843 19:58:57 -- common/autotest_common.sh@10 -- # set +x 00:03:59.843 ************************************ 00:03:59.843 START TEST dm_mount 00:03:59.843 ************************************ 00:03:59.843 19:58:57 -- common/autotest_common.sh@1104 -- # dm_mount 00:03:59.843 19:58:57 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:59.843 19:58:57 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:59.843 19:58:57 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:59.843 19:58:57 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:59.843 19:58:57 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:59.843 19:58:57 -- setup/common.sh@40 -- # local part_no=2 00:03:59.843 19:58:57 -- setup/common.sh@41 -- # local size=1073741824 00:03:59.843 19:58:57 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:59.843 19:58:57 -- setup/common.sh@44 -- # parts=() 00:03:59.843 19:58:57 -- setup/common.sh@44 -- # local parts 00:03:59.843 19:58:57 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:59.843 19:58:57 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.843 19:58:57 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:59.843 19:58:57 -- setup/common.sh@46 -- # (( part++ )) 00:03:59.843 19:58:57 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.843 19:58:57 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:59.843 19:58:57 -- setup/common.sh@46 -- # (( part++ )) 00:03:59.843 19:58:57 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.843 19:58:57 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:59.843 19:58:57 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:59.843 19:58:57 -- setup/common.sh@53 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:00.783 Creating new GPT entries in memory. 00:04:00.783 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:00.783 other utilities. 00:04:00.783 19:58:58 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:00.783 19:58:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.783 19:58:58 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:00.783 19:58:58 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:00.783 19:58:58 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:01.725 Creating new GPT entries in memory. 00:04:01.725 The operation has completed successfully. 00:04:01.725 19:58:59 -- setup/common.sh@57 -- # (( part++ )) 00:04:01.725 19:58:59 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:01.725 19:58:59 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:01.725 19:58:59 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:01.725 19:58:59 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:02.667 The operation has completed successfully. 00:04:02.667 19:59:00 -- setup/common.sh@57 -- # (( part++ )) 00:04:02.667 19:59:00 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:02.667 19:59:00 -- setup/common.sh@62 -- # wait 1302230 00:04:02.927 19:59:00 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:02.927 19:59:00 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:02.927 19:59:00 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:02.927 19:59:00 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:02.927 19:59:00 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:02.927 19:59:00 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:02.927 19:59:00 -- setup/devices.sh@161 -- # break 00:04:02.927 19:59:00 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:02.927 19:59:00 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:02.927 19:59:00 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:02.927 19:59:00 -- setup/devices.sh@166 -- # dm=dm-0 00:04:02.927 19:59:00 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:02.927 19:59:00 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:02.927 19:59:00 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:02.927 19:59:00 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount size= 00:04:02.927 19:59:00 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:02.927 19:59:00 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:02.927 19:59:00 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:02.927 19:59:00 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:02.927 19:59:00 -- setup/devices.sh@174 -- # verify 0000:c9:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:02.927 19:59:00 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:04:02.927 19:59:00 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:02.928 19:59:00 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:02.928 19:59:00 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:02.928 19:59:00 -- setup/devices.sh@53 -- # local found=0 00:04:02.928 19:59:00 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:02.928 19:59:00 -- setup/devices.sh@56 -- # : 00:04:02.928 19:59:00 -- setup/devices.sh@59 -- # local pci status 00:04:02.928 19:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.928 19:59:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:04:02.928 19:59:00 -- setup/devices.sh@47 -- # setup output config 00:04:02.928 19:59:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.928 19:59:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:04:05.474 19:59:02 -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:05.474 19:59:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:05.474 19:59:03 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:05.474 19:59:03 -- setup/devices.sh@63 -- # found=1 00:04:05.474 19:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:05.474 19:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:05.474 19:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:05.474 19:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:05.474 19:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:05.474 19:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:05.474 19:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:05.474 19:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:05.474 19:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:05.474 19:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:05.474 19:59:03 -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:05.474 19:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:05.474 19:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:05.474 19:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:05.474 19:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:05.474 19:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:05.474 19:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.474 19:59:03 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:05.474 19:59:03 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:05.474 19:59:03 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:05.474 19:59:03 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:05.474 19:59:03 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:05.474 19:59:03 -- setup/devices.sh@184 -- # verify 0000:c9:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:05.474 19:59:03 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:04:05.474 19:59:03 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:05.474 19:59:03 -- setup/devices.sh@50 -- # local mount_point= 00:04:05.474 19:59:03 -- setup/devices.sh@51 -- # local test_file= 00:04:05.474 19:59:03 -- setup/devices.sh@53 -- # local found=0 00:04:05.474 19:59:03 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:05.474 19:59:03 -- setup/devices.sh@59 -- # local pci status 00:04:05.474 19:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.474 19:59:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:04:05.474 19:59:03 -- setup/devices.sh@47 -- # setup output config 00:04:05.474 19:59:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.474 19:59:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:04:08.072 19:59:05 -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:08.072 19:59:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.333 19:59:06 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:08.334 19:59:06 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:08.334 19:59:06 -- setup/devices.sh@63 -- # found=1 00:04:08.334 19:59:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.334 19:59:06 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:08.334 19:59:06 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.334 19:59:06 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:08.334 19:59:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.334 19:59:06 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:08.334 19:59:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.334 19:59:06 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:08.334 19:59:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.334 19:59:06 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:08.334 19:59:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.334 19:59:06 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:08.334 19:59:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.334 19:59:06 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:08.334 19:59:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.334 19:59:06 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:08.334 19:59:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.334 19:59:06 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:08.334 19:59:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.334 19:59:06 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:08.334 19:59:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.334 19:59:06 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:08.334 19:59:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.334 19:59:06 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:08.334 19:59:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.334 19:59:06 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:08.334 19:59:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.334 19:59:06 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:08.334 19:59:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.334 19:59:06 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:08.334 19:59:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.334 19:59:06 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:08.334 19:59:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.594 19:59:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:08.594 19:59:06 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:08.594 19:59:06 -- setup/devices.sh@68 -- # return 0 00:04:08.594 19:59:06 -- setup/devices.sh@187 -- # cleanup_dm 00:04:08.594 19:59:06 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:08.594 19:59:06 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:08.594 19:59:06 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:08.594 19:59:06 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:08.594 19:59:06 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:08.594 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:08.594 19:59:06 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:08.594 19:59:06 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:08.594 
00:04:08.594 real 0m8.814s 00:04:08.594 user 0m1.825s 00:04:08.594 sys 0m3.580s 00:04:08.594 19:59:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.594 19:59:06 -- common/autotest_common.sh@10 -- # set +x 00:04:08.594 ************************************ 00:04:08.594 END TEST dm_mount 00:04:08.594 ************************************ 00:04:08.594 19:59:06 -- setup/devices.sh@1 -- # cleanup 00:04:08.594 19:59:06 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:08.594 19:59:06 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.594 19:59:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:08.594 19:59:06 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:08.594 19:59:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:08.594 19:59:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:08.855 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:08.855 /dev/nvme0n1: 8 bytes were erased at offset 0xdf90355e00 (gpt): 45 46 49 20 50 41 52 54 00:04:08.855 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:08.855 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:08.855 19:59:06 -- setup/devices.sh@12 -- # cleanup_dm 00:04:08.855 19:59:06 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:08.855 19:59:06 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:08.855 19:59:06 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:08.855 19:59:06 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:08.855 19:59:06 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:08.855 19:59:06 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:08.855 00:04:08.855 real 0m22.879s 00:04:08.855 user 0m5.667s 00:04:08.855 sys 0m10.515s 00:04:08.855 19:59:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.855 19:59:06 -- common/autotest_common.sh@10 -- # set +x 00:04:08.855 ************************************ 00:04:08.855 END TEST devices 00:04:08.855 ************************************ 00:04:08.855 00:04:08.855 real 1m18.889s 00:04:08.855 user 0m21.724s 00:04:08.855 sys 0m40.863s 00:04:08.855 19:59:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.855 19:59:06 -- common/autotest_common.sh@10 -- # set +x 00:04:08.855 ************************************ 00:04:08.855 END TEST setup.sh 00:04:08.855 ************************************ 00:04:08.855 19:59:06 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:04:12.152 Hugepages 00:04:12.152 node hugesize free / total 00:04:12.152 node0 1048576kB 0 / 0 00:04:12.153 node0 2048kB 2048 / 2048 00:04:12.153 node1 1048576kB 0 / 0 00:04:12.153 node1 2048kB 0 / 0 00:04:12.153 00:04:12.153 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:12.153 NVMe 0000:03:00.0 1344 51c3 0 nvme nvme1 nvme1n1 00:04:12.153 DSA 0000:6a:01.0 8086 0b25 0 idxd - - 00:04:12.153 IAA 0000:6a:02.0 8086 0cfe 0 idxd - - 00:04:12.153 DSA 0000:6f:01.0 8086 0b25 0 idxd - - 00:04:12.153 IAA 0000:6f:02.0 8086 0cfe 0 idxd - - 00:04:12.153 DSA 0000:74:01.0 8086 0b25 0 idxd - - 00:04:12.153 IAA 0000:74:02.0 8086 0cfe 0 idxd - - 00:04:12.153 DSA 0000:79:01.0 8086 0b25 0 idxd - - 00:04:12.153 IAA 0000:79:02.0 8086 0cfe 0 idxd - - 00:04:12.153 NVMe 0000:c9:00.0 144d a80a 1 nvme nvme0 nvme0n1 00:04:12.153 DSA 0000:e7:01.0 8086 0b25 1 idxd - 
- 00:04:12.153 IAA 0000:e7:02.0 8086 0cfe 1 idxd - - 00:04:12.153 DSA 0000:ec:01.0 8086 0b25 1 idxd - - 00:04:12.153 IAA 0000:ec:02.0 8086 0cfe 1 idxd - - 00:04:12.153 DSA 0000:f1:01.0 8086 0b25 1 idxd - - 00:04:12.153 IAA 0000:f1:02.0 8086 0cfe 1 idxd - - 00:04:12.153 DSA 0000:f6:01.0 8086 0b25 1 idxd - - 00:04:12.153 IAA 0000:f6:02.0 8086 0cfe 1 idxd - - 00:04:12.153 19:59:09 -- spdk/autotest.sh@141 -- # uname -s 00:04:12.153 19:59:09 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:12.153 19:59:09 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:12.153 19:59:09 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:04:14.696 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:14.696 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:14.696 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:14.696 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:04:14.696 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:14.696 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:04:14.696 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:14.696 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:04:14.957 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:14.957 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:04:14.957 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:04:14.957 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:04:14.957 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:14.957 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:04:14.957 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:15.218 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:04:15.479 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:04:15.739 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:04:16.000 19:59:13 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:16.942 19:59:14 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:16.942 19:59:14 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:16.942 19:59:14 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:16.942 19:59:14 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:16.942 19:59:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:16.942 19:59:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:16.942 19:59:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:17.204 19:59:14 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:17.204 19:59:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:17.204 19:59:14 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:17.204 19:59:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:04:17.204 19:59:14 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.750 Waiting for block devices as requested 00:04:19.750 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:04:20.012 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:20.272 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:20.272 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:20.272 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:04:20.272 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:20.533 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:04:20.533 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:20.533 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:04:20.533 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:20.793 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:04:20.793 
0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:04:20.793 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:04:21.053 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:21.053 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:04:21.053 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:21.053 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:04:21.314 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:04:21.575 19:59:19 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:21.575 19:59:19 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:03:00.0 00:04:21.575 19:59:19 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:21.575 19:59:19 -- common/autotest_common.sh@1487 -- # grep 0000:03:00.0/nvme/nvme 00:04:21.575 19:59:19 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/nvme/nvme1 00:04:21.575 19:59:19 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/nvme/nvme1 ]] 00:04:21.575 19:59:19 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/nvme/nvme1 00:04:21.575 19:59:19 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:21.575 19:59:19 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:04:21.575 19:59:19 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:04:21.575 19:59:19 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:04:21.575 19:59:19 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:21.575 19:59:19 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:21.575 19:59:19 -- common/autotest_common.sh@1530 -- # oacs=' 0x5e' 00:04:21.575 19:59:19 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:21.575 19:59:19 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:21.575 19:59:19 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:21.575 19:59:19 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:04:21.575 19:59:19 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:21.575 19:59:19 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:21.575 19:59:19 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:21.575 19:59:19 -- common/autotest_common.sh@1542 -- # continue 00:04:21.575 19:59:19 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:21.575 19:59:19 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:c9:00.0 00:04:21.575 19:59:19 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:21.575 19:59:19 -- common/autotest_common.sh@1487 -- # grep 0000:c9:00.0/nvme/nvme 00:04:21.575 19:59:19 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:04:21.575 19:59:19 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 ]] 00:04:21.576 19:59:19 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:04:21.576 19:59:19 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:21.576 19:59:19 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:21.576 19:59:19 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:21.576 19:59:19 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:21.576 19:59:19 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:21.576 19:59:19 -- 
common/autotest_common.sh@1530 -- # grep oacs 00:04:21.576 19:59:19 -- common/autotest_common.sh@1530 -- # oacs=' 0x5f' 00:04:21.576 19:59:19 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:21.576 19:59:19 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:21.576 19:59:19 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:21.576 19:59:19 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:21.576 19:59:19 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:21.576 19:59:19 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:21.576 19:59:19 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:21.576 19:59:19 -- common/autotest_common.sh@1542 -- # continue 00:04:21.576 19:59:19 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:21.576 19:59:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:21.576 19:59:19 -- common/autotest_common.sh@10 -- # set +x 00:04:21.576 19:59:19 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:21.576 19:59:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:21.576 19:59:19 -- common/autotest_common.sh@10 -- # set +x 00:04:21.576 19:59:19 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:04:24.122 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:24.122 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:24.122 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:24.122 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:04:24.122 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:24.122 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:04:24.122 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:24.122 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:04:24.122 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:24.122 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:04:24.122 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:04:24.383 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:04:24.383 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:24.383 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:04:24.383 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:24.383 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:04:24.954 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:04:25.215 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:04:25.215 19:59:23 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:25.215 19:59:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:25.215 19:59:23 -- common/autotest_common.sh@10 -- # set +x 00:04:25.478 19:59:23 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:25.478 19:59:23 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:25.478 19:59:23 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:25.478 19:59:23 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:25.478 19:59:23 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:25.478 19:59:23 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:25.478 19:59:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:25.478 19:59:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:25.478 19:59:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.478 19:59:23 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:25.478 19:59:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:25.478 19:59:23 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:25.478 19:59:23 -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:04:25.478 19:59:23 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:25.478 19:59:23 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:03:00.0/device 00:04:25.478 19:59:23 -- common/autotest_common.sh@1565 -- # device=0x51c3 00:04:25.478 19:59:23 -- common/autotest_common.sh@1566 -- # [[ 0x51c3 == \0\x\0\a\5\4 ]] 00:04:25.478 19:59:23 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:25.478 19:59:23 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:c9:00.0/device 00:04:25.478 19:59:23 -- common/autotest_common.sh@1565 -- # device=0xa80a 00:04:25.478 19:59:23 -- common/autotest_common.sh@1566 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:25.478 19:59:23 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:04:25.478 19:59:23 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:25.478 19:59:23 -- common/autotest_common.sh@1578 -- # return 0 00:04:25.478 19:59:23 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:25.478 19:59:23 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:25.478 19:59:23 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:25.478 19:59:23 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:25.478 19:59:23 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:25.478 19:59:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:25.478 19:59:23 -- common/autotest_common.sh@10 -- # set +x 00:04:25.478 19:59:23 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 00:04:25.478 19:59:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:25.478 19:59:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:25.478 19:59:23 -- common/autotest_common.sh@10 -- # set +x 00:04:25.478 ************************************ 00:04:25.478 START TEST env 00:04:25.478 ************************************ 00:04:25.478 19:59:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 00:04:25.478 * Looking for test storage... 
00:04:25.478 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env 00:04:25.478 19:59:23 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:04:25.478 19:59:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:25.478 19:59:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:25.478 19:59:23 -- common/autotest_common.sh@10 -- # set +x 00:04:25.478 ************************************ 00:04:25.478 START TEST env_memory 00:04:25.478 ************************************ 00:04:25.478 19:59:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:04:25.478 00:04:25.478 00:04:25.478 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.478 http://cunit.sourceforge.net/ 00:04:25.478 00:04:25.478 00:04:25.478 Suite: memory 00:04:25.478 Test: alloc and free memory map ...[2024-04-25 19:59:23.405411] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:25.739 passed 00:04:25.739 Test: mem map translation ...[2024-04-25 19:59:23.452798] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:25.739 [2024-04-25 19:59:23.452832] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:25.739 [2024-04-25 19:59:23.452912] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:25.739 [2024-04-25 19:59:23.452936] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:25.739 passed 00:04:25.739 Test: mem map registration ...[2024-04-25 19:59:23.539535] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:25.739 [2024-04-25 19:59:23.539571] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:25.739 passed 00:04:25.739 Test: mem map adjacent registrations ...passed 00:04:25.739 00:04:25.739 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.739 suites 1 1 n/a 0 0 00:04:25.739 tests 4 4 4 0 0 00:04:25.739 asserts 152 152 152 0 n/a 00:04:25.739 00:04:25.739 Elapsed time = 0.294 seconds 00:04:25.739 00:04:25.739 real 0m0.314s 00:04:25.739 user 0m0.295s 00:04:25.739 sys 0m0.018s 00:04:25.740 19:59:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.740 19:59:23 -- common/autotest_common.sh@10 -- # set +x 00:04:25.740 ************************************ 00:04:25.740 END TEST env_memory 00:04:25.740 ************************************ 00:04:26.000 19:59:23 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:26.000 19:59:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:26.000 19:59:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:26.000 19:59:23 -- common/autotest_common.sh@10 -- # set +x 00:04:26.000 ************************************ 00:04:26.000 
START TEST env_vtophys 00:04:26.000 ************************************ 00:04:26.000 19:59:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:26.000 EAL: lib.eal log level changed from notice to debug 00:04:26.000 EAL: Detected lcore 0 as core 0 on socket 0 00:04:26.000 EAL: Detected lcore 1 as core 1 on socket 0 00:04:26.000 EAL: Detected lcore 2 as core 2 on socket 0 00:04:26.000 EAL: Detected lcore 3 as core 3 on socket 0 00:04:26.000 EAL: Detected lcore 4 as core 4 on socket 0 00:04:26.000 EAL: Detected lcore 5 as core 5 on socket 0 00:04:26.000 EAL: Detected lcore 6 as core 6 on socket 0 00:04:26.000 EAL: Detected lcore 7 as core 7 on socket 0 00:04:26.000 EAL: Detected lcore 8 as core 8 on socket 0 00:04:26.000 EAL: Detected lcore 9 as core 9 on socket 0 00:04:26.000 EAL: Detected lcore 10 as core 10 on socket 0 00:04:26.000 EAL: Detected lcore 11 as core 11 on socket 0 00:04:26.000 EAL: Detected lcore 12 as core 12 on socket 0 00:04:26.000 EAL: Detected lcore 13 as core 13 on socket 0 00:04:26.000 EAL: Detected lcore 14 as core 14 on socket 0 00:04:26.000 EAL: Detected lcore 15 as core 15 on socket 0 00:04:26.000 EAL: Detected lcore 16 as core 16 on socket 0 00:04:26.000 EAL: Detected lcore 17 as core 17 on socket 0 00:04:26.000 EAL: Detected lcore 18 as core 18 on socket 0 00:04:26.000 EAL: Detected lcore 19 as core 19 on socket 0 00:04:26.000 EAL: Detected lcore 20 as core 20 on socket 0 00:04:26.000 EAL: Detected lcore 21 as core 21 on socket 0 00:04:26.000 EAL: Detected lcore 22 as core 22 on socket 0 00:04:26.001 EAL: Detected lcore 23 as core 23 on socket 0 00:04:26.001 EAL: Detected lcore 24 as core 24 on socket 0 00:04:26.001 EAL: Detected lcore 25 as core 25 on socket 0 00:04:26.001 EAL: Detected lcore 26 as core 26 on socket 0 00:04:26.001 EAL: Detected lcore 27 as core 27 on socket 0 00:04:26.001 EAL: Detected lcore 28 as core 28 on socket 0 00:04:26.001 EAL: Detected lcore 29 as core 29 on socket 0 00:04:26.001 EAL: Detected lcore 30 as core 30 on socket 0 00:04:26.001 EAL: Detected lcore 31 as core 31 on socket 0 00:04:26.001 EAL: Detected lcore 32 as core 0 on socket 1 00:04:26.001 EAL: Detected lcore 33 as core 1 on socket 1 00:04:26.001 EAL: Detected lcore 34 as core 2 on socket 1 00:04:26.001 EAL: Detected lcore 35 as core 3 on socket 1 00:04:26.001 EAL: Detected lcore 36 as core 4 on socket 1 00:04:26.001 EAL: Detected lcore 37 as core 5 on socket 1 00:04:26.001 EAL: Detected lcore 38 as core 6 on socket 1 00:04:26.001 EAL: Detected lcore 39 as core 7 on socket 1 00:04:26.001 EAL: Detected lcore 40 as core 8 on socket 1 00:04:26.001 EAL: Detected lcore 41 as core 9 on socket 1 00:04:26.001 EAL: Detected lcore 42 as core 10 on socket 1 00:04:26.001 EAL: Detected lcore 43 as core 11 on socket 1 00:04:26.001 EAL: Detected lcore 44 as core 12 on socket 1 00:04:26.001 EAL: Detected lcore 45 as core 13 on socket 1 00:04:26.001 EAL: Detected lcore 46 as core 14 on socket 1 00:04:26.001 EAL: Detected lcore 47 as core 15 on socket 1 00:04:26.001 EAL: Detected lcore 48 as core 16 on socket 1 00:04:26.001 EAL: Detected lcore 49 as core 17 on socket 1 00:04:26.001 EAL: Detected lcore 50 as core 18 on socket 1 00:04:26.001 EAL: Detected lcore 51 as core 19 on socket 1 00:04:26.001 EAL: Detected lcore 52 as core 20 on socket 1 00:04:26.001 EAL: Detected lcore 53 as core 21 on socket 1 00:04:26.001 EAL: Detected lcore 54 as core 22 on socket 1 00:04:26.001 EAL: Detected lcore 55 as core 23 on socket 1 
00:04:26.001 EAL: Detected lcore 56 as core 24 on socket 1 00:04:26.001 EAL: Detected lcore 57 as core 25 on socket 1 00:04:26.001 EAL: Detected lcore 58 as core 26 on socket 1 00:04:26.001 EAL: Detected lcore 59 as core 27 on socket 1 00:04:26.001 EAL: Detected lcore 60 as core 28 on socket 1 00:04:26.001 EAL: Detected lcore 61 as core 29 on socket 1 00:04:26.001 EAL: Detected lcore 62 as core 30 on socket 1 00:04:26.001 EAL: Detected lcore 63 as core 31 on socket 1 00:04:26.001 EAL: Detected lcore 64 as core 0 on socket 0 00:04:26.001 EAL: Detected lcore 65 as core 1 on socket 0 00:04:26.001 EAL: Detected lcore 66 as core 2 on socket 0 00:04:26.001 EAL: Detected lcore 67 as core 3 on socket 0 00:04:26.001 EAL: Detected lcore 68 as core 4 on socket 0 00:04:26.001 EAL: Detected lcore 69 as core 5 on socket 0 00:04:26.001 EAL: Detected lcore 70 as core 6 on socket 0 00:04:26.001 EAL: Detected lcore 71 as core 7 on socket 0 00:04:26.001 EAL: Detected lcore 72 as core 8 on socket 0 00:04:26.001 EAL: Detected lcore 73 as core 9 on socket 0 00:04:26.001 EAL: Detected lcore 74 as core 10 on socket 0 00:04:26.001 EAL: Detected lcore 75 as core 11 on socket 0 00:04:26.001 EAL: Detected lcore 76 as core 12 on socket 0 00:04:26.001 EAL: Detected lcore 77 as core 13 on socket 0 00:04:26.001 EAL: Detected lcore 78 as core 14 on socket 0 00:04:26.001 EAL: Detected lcore 79 as core 15 on socket 0 00:04:26.001 EAL: Detected lcore 80 as core 16 on socket 0 00:04:26.001 EAL: Detected lcore 81 as core 17 on socket 0 00:04:26.001 EAL: Detected lcore 82 as core 18 on socket 0 00:04:26.001 EAL: Detected lcore 83 as core 19 on socket 0 00:04:26.001 EAL: Detected lcore 84 as core 20 on socket 0 00:04:26.001 EAL: Detected lcore 85 as core 21 on socket 0 00:04:26.001 EAL: Detected lcore 86 as core 22 on socket 0 00:04:26.001 EAL: Detected lcore 87 as core 23 on socket 0 00:04:26.001 EAL: Detected lcore 88 as core 24 on socket 0 00:04:26.001 EAL: Detected lcore 89 as core 25 on socket 0 00:04:26.001 EAL: Detected lcore 90 as core 26 on socket 0 00:04:26.001 EAL: Detected lcore 91 as core 27 on socket 0 00:04:26.001 EAL: Detected lcore 92 as core 28 on socket 0 00:04:26.001 EAL: Detected lcore 93 as core 29 on socket 0 00:04:26.001 EAL: Detected lcore 94 as core 30 on socket 0 00:04:26.001 EAL: Detected lcore 95 as core 31 on socket 0 00:04:26.001 EAL: Detected lcore 96 as core 0 on socket 1 00:04:26.001 EAL: Detected lcore 97 as core 1 on socket 1 00:04:26.001 EAL: Detected lcore 98 as core 2 on socket 1 00:04:26.001 EAL: Detected lcore 99 as core 3 on socket 1 00:04:26.001 EAL: Detected lcore 100 as core 4 on socket 1 00:04:26.001 EAL: Detected lcore 101 as core 5 on socket 1 00:04:26.001 EAL: Detected lcore 102 as core 6 on socket 1 00:04:26.001 EAL: Detected lcore 103 as core 7 on socket 1 00:04:26.001 EAL: Detected lcore 104 as core 8 on socket 1 00:04:26.001 EAL: Detected lcore 105 as core 9 on socket 1 00:04:26.001 EAL: Detected lcore 106 as core 10 on socket 1 00:04:26.001 EAL: Detected lcore 107 as core 11 on socket 1 00:04:26.001 EAL: Detected lcore 108 as core 12 on socket 1 00:04:26.001 EAL: Detected lcore 109 as core 13 on socket 1 00:04:26.001 EAL: Detected lcore 110 as core 14 on socket 1 00:04:26.001 EAL: Detected lcore 111 as core 15 on socket 1 00:04:26.001 EAL: Detected lcore 112 as core 16 on socket 1 00:04:26.001 EAL: Detected lcore 113 as core 17 on socket 1 00:04:26.001 EAL: Detected lcore 114 as core 18 on socket 1 00:04:26.001 EAL: Detected lcore 115 as core 19 on socket 1 00:04:26.001 EAL: 
Detected lcore 116 as core 20 on socket 1 00:04:26.001 EAL: Detected lcore 117 as core 21 on socket 1 00:04:26.001 EAL: Detected lcore 118 as core 22 on socket 1 00:04:26.001 EAL: Detected lcore 119 as core 23 on socket 1 00:04:26.001 EAL: Detected lcore 120 as core 24 on socket 1 00:04:26.001 EAL: Detected lcore 121 as core 25 on socket 1 00:04:26.001 EAL: Detected lcore 122 as core 26 on socket 1 00:04:26.001 EAL: Detected lcore 123 as core 27 on socket 1 00:04:26.001 EAL: Detected lcore 124 as core 28 on socket 1 00:04:26.001 EAL: Detected lcore 125 as core 29 on socket 1 00:04:26.001 EAL: Detected lcore 126 as core 30 on socket 1 00:04:26.001 EAL: Detected lcore 127 as core 31 on socket 1 00:04:26.001 EAL: Maximum logical cores by configuration: 128 00:04:26.001 EAL: Detected CPU lcores: 128 00:04:26.001 EAL: Detected NUMA nodes: 2 00:04:26.001 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:26.001 EAL: Detected shared linkage of DPDK 00:04:26.001 EAL: No shared files mode enabled, IPC will be disabled 00:04:26.001 EAL: Bus pci wants IOVA as 'DC' 00:04:26.001 EAL: Buses did not request a specific IOVA mode. 00:04:26.001 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:26.001 EAL: Selected IOVA mode 'VA' 00:04:26.001 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.001 EAL: Probing VFIO support... 00:04:26.001 EAL: IOMMU type 1 (Type 1) is supported 00:04:26.001 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:26.001 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:26.001 EAL: VFIO support initialized 00:04:26.001 EAL: Ask a virtual area of 0x2e000 bytes 00:04:26.001 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:26.001 EAL: Setting up physically contiguous memory... 00:04:26.001 EAL: Setting maximum number of open files to 524288 00:04:26.001 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:26.001 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:26.001 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:26.001 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.001 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:26.001 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.001 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.001 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:26.001 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:26.001 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.001 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:26.001 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.001 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.001 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:26.001 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:26.001 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.001 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:26.001 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.001 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.001 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:26.001 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:26.001 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.001 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:26.001 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.001 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.001 EAL: 
Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:26.001 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:26.001 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:26.001 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.001 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:26.001 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:26.001 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.001 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:26.001 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:26.001 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.001 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:26.001 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:26.001 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.001 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:26.001 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:26.001 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.001 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:26.001 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:26.001 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.001 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:26.001 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:26.001 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.001 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:26.001 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:26.001 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.001 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:26.001 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:26.001 EAL: Hugepages will be freed exactly as allocated. 00:04:26.001 EAL: No shared files mode enabled, IPC is disabled 00:04:26.001 EAL: No shared files mode enabled, IPC is disabled 00:04:26.001 EAL: TSC frequency is ~1900000 KHz 00:04:26.001 EAL: Main lcore 0 is ready (tid=7f98c9ccca40;cpuset=[0]) 00:04:26.001 EAL: Trying to obtain current memory policy. 00:04:26.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.001 EAL: Restoring previous memory policy: 0 00:04:26.001 EAL: request: mp_malloc_sync 00:04:26.001 EAL: No shared files mode enabled, IPC is disabled 00:04:26.002 EAL: Heap on socket 0 was expanded by 2MB 00:04:26.002 EAL: No shared files mode enabled, IPC is disabled 00:04:26.002 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:26.002 EAL: Mem event callback 'spdk:(nil)' registered 00:04:26.002 00:04:26.002 00:04:26.002 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.002 http://cunit.sourceforge.net/ 00:04:26.002 00:04:26.002 00:04:26.002 Suite: components_suite 00:04:26.263 Test: vtophys_malloc_test ...passed 00:04:26.263 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:26.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.263 EAL: Restoring previous memory policy: 4 00:04:26.263 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.263 EAL: request: mp_malloc_sync 00:04:26.263 EAL: No shared files mode enabled, IPC is disabled 00:04:26.263 EAL: Heap on socket 0 was expanded by 4MB 00:04:26.263 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.263 EAL: request: mp_malloc_sync 00:04:26.263 EAL: No shared files mode enabled, IPC is disabled 00:04:26.263 EAL: Heap on socket 0 was shrunk by 4MB 00:04:26.263 EAL: Trying to obtain current memory policy. 00:04:26.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.263 EAL: Restoring previous memory policy: 4 00:04:26.263 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.263 EAL: request: mp_malloc_sync 00:04:26.263 EAL: No shared files mode enabled, IPC is disabled 00:04:26.263 EAL: Heap on socket 0 was expanded by 6MB 00:04:26.263 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.263 EAL: request: mp_malloc_sync 00:04:26.263 EAL: No shared files mode enabled, IPC is disabled 00:04:26.263 EAL: Heap on socket 0 was shrunk by 6MB 00:04:26.263 EAL: Trying to obtain current memory policy. 00:04:26.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.263 EAL: Restoring previous memory policy: 4 00:04:26.263 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.263 EAL: request: mp_malloc_sync 00:04:26.263 EAL: No shared files mode enabled, IPC is disabled 00:04:26.263 EAL: Heap on socket 0 was expanded by 10MB 00:04:26.263 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.263 EAL: request: mp_malloc_sync 00:04:26.263 EAL: No shared files mode enabled, IPC is disabled 00:04:26.263 EAL: Heap on socket 0 was shrunk by 10MB 00:04:26.263 EAL: Trying to obtain current memory policy. 00:04:26.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.263 EAL: Restoring previous memory policy: 4 00:04:26.263 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.263 EAL: request: mp_malloc_sync 00:04:26.263 EAL: No shared files mode enabled, IPC is disabled 00:04:26.263 EAL: Heap on socket 0 was expanded by 18MB 00:04:26.263 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.263 EAL: request: mp_malloc_sync 00:04:26.263 EAL: No shared files mode enabled, IPC is disabled 00:04:26.263 EAL: Heap on socket 0 was shrunk by 18MB 00:04:26.263 EAL: Trying to obtain current memory policy. 00:04:26.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.263 EAL: Restoring previous memory policy: 4 00:04:26.263 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.263 EAL: request: mp_malloc_sync 00:04:26.263 EAL: No shared files mode enabled, IPC is disabled 00:04:26.263 EAL: Heap on socket 0 was expanded by 34MB 00:04:26.263 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.263 EAL: request: mp_malloc_sync 00:04:26.263 EAL: No shared files mode enabled, IPC is disabled 00:04:26.263 EAL: Heap on socket 0 was shrunk by 34MB 00:04:26.548 EAL: Trying to obtain current memory policy. 
00:04:26.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.548 EAL: Restoring previous memory policy: 4 00:04:26.548 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.548 EAL: request: mp_malloc_sync 00:04:26.548 EAL: No shared files mode enabled, IPC is disabled 00:04:26.548 EAL: Heap on socket 0 was expanded by 66MB 00:04:26.548 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.548 EAL: request: mp_malloc_sync 00:04:26.548 EAL: No shared files mode enabled, IPC is disabled 00:04:26.548 EAL: Heap on socket 0 was shrunk by 66MB 00:04:26.548 EAL: Trying to obtain current memory policy. 00:04:26.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.548 EAL: Restoring previous memory policy: 4 00:04:26.548 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.548 EAL: request: mp_malloc_sync 00:04:26.548 EAL: No shared files mode enabled, IPC is disabled 00:04:26.548 EAL: Heap on socket 0 was expanded by 130MB 00:04:26.548 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.548 EAL: request: mp_malloc_sync 00:04:26.548 EAL: No shared files mode enabled, IPC is disabled 00:04:26.548 EAL: Heap on socket 0 was shrunk by 130MB 00:04:26.548 EAL: Trying to obtain current memory policy. 00:04:26.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.809 EAL: Restoring previous memory policy: 4 00:04:26.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.809 EAL: request: mp_malloc_sync 00:04:26.809 EAL: No shared files mode enabled, IPC is disabled 00:04:26.809 EAL: Heap on socket 0 was expanded by 258MB 00:04:26.809 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.809 EAL: request: mp_malloc_sync 00:04:26.809 EAL: No shared files mode enabled, IPC is disabled 00:04:26.809 EAL: Heap on socket 0 was shrunk by 258MB 00:04:27.086 EAL: Trying to obtain current memory policy. 00:04:27.086 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.086 EAL: Restoring previous memory policy: 4 00:04:27.086 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.086 EAL: request: mp_malloc_sync 00:04:27.086 EAL: No shared files mode enabled, IPC is disabled 00:04:27.086 EAL: Heap on socket 0 was expanded by 514MB 00:04:27.379 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.379 EAL: request: mp_malloc_sync 00:04:27.379 EAL: No shared files mode enabled, IPC is disabled 00:04:27.379 EAL: Heap on socket 0 was shrunk by 514MB 00:04:27.641 EAL: Trying to obtain current memory policy. 
00:04:27.641 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.901 EAL: Restoring previous memory policy: 4 00:04:27.901 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.901 EAL: request: mp_malloc_sync 00:04:27.901 EAL: No shared files mode enabled, IPC is disabled 00:04:27.901 EAL: Heap on socket 0 was expanded by 1026MB 00:04:28.471 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.471 EAL: request: mp_malloc_sync 00:04:28.471 EAL: No shared files mode enabled, IPC is disabled 00:04:28.471 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:29.041 passed 00:04:29.041 00:04:29.041 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.041 suites 1 1 n/a 0 0 00:04:29.041 tests 2 2 2 0 0 00:04:29.041 asserts 497 497 497 0 n/a 00:04:29.041 00:04:29.041 Elapsed time = 2.934 seconds 00:04:29.041 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.041 EAL: request: mp_malloc_sync 00:04:29.041 EAL: No shared files mode enabled, IPC is disabled 00:04:29.041 EAL: Heap on socket 0 was shrunk by 2MB 00:04:29.041 EAL: No shared files mode enabled, IPC is disabled 00:04:29.041 EAL: No shared files mode enabled, IPC is disabled 00:04:29.041 EAL: No shared files mode enabled, IPC is disabled 00:04:29.041 00:04:29.041 real 0m3.182s 00:04:29.041 user 0m2.488s 00:04:29.041 sys 0m0.652s 00:04:29.041 19:59:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.041 19:59:26 -- common/autotest_common.sh@10 -- # set +x 00:04:29.041 ************************************ 00:04:29.041 END TEST env_vtophys 00:04:29.041 ************************************ 00:04:29.041 19:59:26 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut 00:04:29.041 19:59:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:29.041 19:59:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:29.041 19:59:26 -- common/autotest_common.sh@10 -- # set +x 00:04:29.041 ************************************ 00:04:29.041 START TEST env_pci 00:04:29.041 ************************************ 00:04:29.041 19:59:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut 00:04:29.041 00:04:29.041 00:04:29.041 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.041 http://cunit.sourceforge.net/ 00:04:29.041 00:04:29.041 00:04:29.041 Suite: pci 00:04:29.041 Test: pci_hook ...[2024-04-25 19:59:26.957365] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1313842 has claimed it 00:04:29.302 EAL: Cannot find device (10000:00:01.0) 00:04:29.302 EAL: Failed to attach device on primary process 00:04:29.302 passed 00:04:29.302 00:04:29.302 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.302 suites 1 1 n/a 0 0 00:04:29.302 tests 1 1 1 0 0 00:04:29.302 asserts 25 25 25 0 n/a 00:04:29.302 00:04:29.302 Elapsed time = 0.058 seconds 00:04:29.302 00:04:29.302 real 0m0.122s 00:04:29.302 user 0m0.040s 00:04:29.302 sys 0m0.081s 00:04:29.302 19:59:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.302 19:59:27 -- common/autotest_common.sh@10 -- # set +x 00:04:29.302 ************************************ 00:04:29.302 END TEST env_pci 00:04:29.302 ************************************ 00:04:29.302 19:59:27 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:29.302 19:59:27 -- env/env.sh@15 -- # uname 00:04:29.302 19:59:27 -- env/env.sh@15 -- # '[' Linux = Linux ']' 
00:04:29.302 19:59:27 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:29.302 19:59:27 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:29.302 19:59:27 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:29.302 19:59:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:29.302 19:59:27 -- common/autotest_common.sh@10 -- # set +x 00:04:29.302 ************************************ 00:04:29.303 START TEST env_dpdk_post_init 00:04:29.303 ************************************ 00:04:29.303 19:59:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:29.303 EAL: Detected CPU lcores: 128 00:04:29.303 EAL: Detected NUMA nodes: 2 00:04:29.303 EAL: Detected shared linkage of DPDK 00:04:29.303 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:29.303 EAL: Selected IOVA mode 'VA' 00:04:29.303 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.303 EAL: VFIO support initialized 00:04:29.303 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:29.563 EAL: Using IOMMU type 1 (Type 1) 00:04:29.563 EAL: Probe PCI driver: spdk_nvme (1344:51c3) device: 0000:03:00.0 (socket 0) 00:04:29.823 EAL: Ignore mapping IO port bar(1) 00:04:29.823 EAL: Ignore mapping IO port bar(3) 00:04:29.823 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6a:01.0 (socket 0) 00:04:30.084 EAL: Ignore mapping IO port bar(1) 00:04:30.084 EAL: Ignore mapping IO port bar(3) 00:04:30.084 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:6a:02.0 (socket 0) 00:04:30.345 EAL: Ignore mapping IO port bar(1) 00:04:30.345 EAL: Ignore mapping IO port bar(3) 00:04:30.345 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6f:01.0 (socket 0) 00:04:30.345 EAL: Ignore mapping IO port bar(1) 00:04:30.345 EAL: Ignore mapping IO port bar(3) 00:04:30.605 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:6f:02.0 (socket 0) 00:04:30.605 EAL: Ignore mapping IO port bar(1) 00:04:30.605 EAL: Ignore mapping IO port bar(3) 00:04:30.866 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:74:01.0 (socket 0) 00:04:30.866 EAL: Ignore mapping IO port bar(1) 00:04:30.866 EAL: Ignore mapping IO port bar(3) 00:04:31.127 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:74:02.0 (socket 0) 00:04:31.127 EAL: Ignore mapping IO port bar(1) 00:04:31.127 EAL: Ignore mapping IO port bar(3) 00:04:31.127 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:79:01.0 (socket 0) 00:04:31.388 EAL: Ignore mapping IO port bar(1) 00:04:31.388 EAL: Ignore mapping IO port bar(3) 00:04:31.388 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:79:02.0 (socket 0) 00:04:31.649 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:c9:00.0 (socket 1) 00:04:31.910 EAL: Ignore mapping IO port bar(1) 00:04:31.910 EAL: Ignore mapping IO port bar(3) 00:04:31.910 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:e7:01.0 (socket 1) 00:04:32.170 EAL: Ignore mapping IO port bar(1) 00:04:32.170 EAL: Ignore mapping IO port bar(3) 00:04:32.170 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:e7:02.0 (socket 1) 00:04:32.170 EAL: Ignore mapping IO port bar(1) 00:04:32.170 EAL: Ignore mapping IO port bar(3) 00:04:32.431 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:ec:01.0 (socket 1) 00:04:32.431 EAL: Ignore 
mapping IO port bar(1) 00:04:32.431 EAL: Ignore mapping IO port bar(3) 00:04:32.691 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:ec:02.0 (socket 1) 00:04:32.691 EAL: Ignore mapping IO port bar(1) 00:04:32.691 EAL: Ignore mapping IO port bar(3) 00:04:32.691 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:f1:01.0 (socket 1) 00:04:32.951 EAL: Ignore mapping IO port bar(1) 00:04:32.951 EAL: Ignore mapping IO port bar(3) 00:04:32.951 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:f1:02.0 (socket 1) 00:04:33.211 EAL: Ignore mapping IO port bar(1) 00:04:33.211 EAL: Ignore mapping IO port bar(3) 00:04:33.211 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:f6:01.0 (socket 1) 00:04:33.471 EAL: Ignore mapping IO port bar(1) 00:04:33.471 EAL: Ignore mapping IO port bar(3) 00:04:33.471 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:f6:02.0 (socket 1) 00:04:34.413 EAL: Releasing PCI mapped resource for 0000:03:00.0 00:04:34.413 EAL: Calling pci_unmap_resource for 0000:03:00.0 at 0x202001000000 00:04:34.413 EAL: Releasing PCI mapped resource for 0000:c9:00.0 00:04:34.413 EAL: Calling pci_unmap_resource for 0000:c9:00.0 at 0x2020011c0000 00:04:34.413 Starting DPDK initialization... 00:04:34.413 Starting SPDK post initialization... 00:04:34.413 SPDK NVMe probe 00:04:34.413 Attaching to 0000:03:00.0 00:04:34.413 Attaching to 0000:c9:00.0 00:04:34.413 Attached to 0000:c9:00.0 00:04:34.413 Attached to 0000:03:00.0 00:04:34.413 Cleaning up... 00:04:36.342 00:04:36.342 real 0m6.894s 00:04:36.342 user 0m1.050s 00:04:36.342 sys 0m0.152s 00:04:36.342 19:59:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.342 19:59:33 -- common/autotest_common.sh@10 -- # set +x 00:04:36.342 ************************************ 00:04:36.342 END TEST env_dpdk_post_init 00:04:36.342 ************************************ 00:04:36.342 19:59:34 -- env/env.sh@26 -- # uname 00:04:36.342 19:59:34 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:36.342 19:59:34 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.342 19:59:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:36.342 19:59:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:36.342 19:59:34 -- common/autotest_common.sh@10 -- # set +x 00:04:36.342 ************************************ 00:04:36.342 START TEST env_mem_callbacks 00:04:36.342 ************************************ 00:04:36.342 19:59:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.342 EAL: Detected CPU lcores: 128 00:04:36.342 EAL: Detected NUMA nodes: 2 00:04:36.342 EAL: Detected shared linkage of DPDK 00:04:36.342 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:36.342 EAL: Selected IOVA mode 'VA' 00:04:36.342 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.342 EAL: VFIO support initialized 00:04:36.342 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:36.342 00:04:36.342 00:04:36.342 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.342 http://cunit.sourceforge.net/ 00:04:36.342 00:04:36.342 00:04:36.342 Suite: memory 00:04:36.342 Test: test ... 
00:04:36.342 register 0x200000200000 2097152 00:04:36.342 malloc 3145728 00:04:36.342 register 0x200000400000 4194304 00:04:36.342 buf 0x2000004fffc0 len 3145728 PASSED 00:04:36.342 malloc 64 00:04:36.342 buf 0x2000004ffec0 len 64 PASSED 00:04:36.342 malloc 4194304 00:04:36.342 register 0x200000800000 6291456 00:04:36.342 buf 0x2000009fffc0 len 4194304 PASSED 00:04:36.342 free 0x2000004fffc0 3145728 00:04:36.342 free 0x2000004ffec0 64 00:04:36.342 unregister 0x200000400000 4194304 PASSED 00:04:36.342 free 0x2000009fffc0 4194304 00:04:36.342 unregister 0x200000800000 6291456 PASSED 00:04:36.342 malloc 8388608 00:04:36.342 register 0x200000400000 10485760 00:04:36.342 buf 0x2000005fffc0 len 8388608 PASSED 00:04:36.342 free 0x2000005fffc0 8388608 00:04:36.342 unregister 0x200000400000 10485760 PASSED 00:04:36.342 passed 00:04:36.342 00:04:36.342 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.342 suites 1 1 n/a 0 0 00:04:36.342 tests 1 1 1 0 0 00:04:36.342 asserts 15 15 15 0 n/a 00:04:36.342 00:04:36.342 Elapsed time = 0.022 seconds 00:04:36.342 00:04:36.342 real 0m0.136s 00:04:36.342 user 0m0.045s 00:04:36.342 sys 0m0.091s 00:04:36.342 19:59:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.342 19:59:34 -- common/autotest_common.sh@10 -- # set +x 00:04:36.342 ************************************ 00:04:36.342 END TEST env_mem_callbacks 00:04:36.342 ************************************ 00:04:36.342 00:04:36.342 real 0m10.894s 00:04:36.342 user 0m4.003s 00:04:36.342 sys 0m1.189s 00:04:36.342 19:59:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.342 19:59:34 -- common/autotest_common.sh@10 -- # set +x 00:04:36.342 ************************************ 00:04:36.342 END TEST env 00:04:36.342 ************************************ 00:04:36.342 19:59:34 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh 00:04:36.342 19:59:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:36.342 19:59:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:36.342 19:59:34 -- common/autotest_common.sh@10 -- # set +x 00:04:36.342 ************************************ 00:04:36.342 START TEST rpc 00:04:36.342 ************************************ 00:04:36.342 19:59:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh 00:04:36.603 * Looking for test storage... 00:04:36.603 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:36.603 19:59:34 -- rpc/rpc.sh@65 -- # spdk_pid=1315438 00:04:36.603 19:59:34 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.603 19:59:34 -- rpc/rpc.sh@67 -- # waitforlisten 1315438 00:04:36.603 19:59:34 -- common/autotest_common.sh@819 -- # '[' -z 1315438 ']' 00:04:36.603 19:59:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.603 19:59:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:36.603 19:59:34 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:36.603 19:59:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:36.603 19:59:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:36.603 19:59:34 -- common/autotest_common.sh@10 -- # set +x 00:04:36.603 [2024-04-25 19:59:34.389676] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:36.603 [2024-04-25 19:59:34.389827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1315438 ] 00:04:36.603 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.603 [2024-04-25 19:59:34.519711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.864 [2024-04-25 19:59:34.614072] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:36.864 [2024-04-25 19:59:34.614282] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:36.864 [2024-04-25 19:59:34.614295] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1315438' to capture a snapshot of events at runtime. 00:04:36.864 [2024-04-25 19:59:34.614307] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1315438 for offline analysis/debug. 00:04:36.864 [2024-04-25 19:59:34.614336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.435 19:59:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:37.435 19:59:35 -- common/autotest_common.sh@852 -- # return 0 00:04:37.435 19:59:35 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:37.435 19:59:35 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:37.435 19:59:35 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:37.435 19:59:35 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:37.435 19:59:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:37.435 19:59:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.435 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.435 ************************************ 00:04:37.435 START TEST rpc_integrity 00:04:37.435 ************************************ 00:04:37.435 19:59:35 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:37.435 19:59:35 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.435 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.435 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.435 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.435 19:59:35 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:37.435 19:59:35 -- rpc/rpc.sh@13 -- # jq length 00:04:37.435 19:59:35 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.435 19:59:35 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.435 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.435 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.435 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.435 19:59:35 -- 
rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:37.435 19:59:35 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.435 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.435 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.435 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.435 19:59:35 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.435 { 00:04:37.435 "name": "Malloc0", 00:04:37.435 "aliases": [ 00:04:37.435 "4da8d540-a23c-492f-a02f-91e6a58290c0" 00:04:37.435 ], 00:04:37.435 "product_name": "Malloc disk", 00:04:37.435 "block_size": 512, 00:04:37.435 "num_blocks": 16384, 00:04:37.435 "uuid": "4da8d540-a23c-492f-a02f-91e6a58290c0", 00:04:37.435 "assigned_rate_limits": { 00:04:37.435 "rw_ios_per_sec": 0, 00:04:37.435 "rw_mbytes_per_sec": 0, 00:04:37.435 "r_mbytes_per_sec": 0, 00:04:37.435 "w_mbytes_per_sec": 0 00:04:37.435 }, 00:04:37.435 "claimed": false, 00:04:37.435 "zoned": false, 00:04:37.435 "supported_io_types": { 00:04:37.435 "read": true, 00:04:37.435 "write": true, 00:04:37.435 "unmap": true, 00:04:37.435 "write_zeroes": true, 00:04:37.435 "flush": true, 00:04:37.435 "reset": true, 00:04:37.435 "compare": false, 00:04:37.435 "compare_and_write": false, 00:04:37.435 "abort": true, 00:04:37.435 "nvme_admin": false, 00:04:37.435 "nvme_io": false 00:04:37.435 }, 00:04:37.435 "memory_domains": [ 00:04:37.435 { 00:04:37.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.435 "dma_device_type": 2 00:04:37.435 } 00:04:37.435 ], 00:04:37.435 "driver_specific": {} 00:04:37.435 } 00:04:37.435 ]' 00:04:37.435 19:59:35 -- rpc/rpc.sh@17 -- # jq length 00:04:37.435 19:59:35 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.435 19:59:35 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:37.435 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.435 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.435 [2024-04-25 19:59:35.213407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:37.435 [2024-04-25 19:59:35.213465] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.435 [2024-04-25 19:59:35.213503] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000020180 00:04:37.435 [2024-04-25 19:59:35.213516] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.435 [2024-04-25 19:59:35.215807] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.435 [2024-04-25 19:59:35.215838] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.435 Passthru0 00:04:37.436 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.436 19:59:35 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.436 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.436 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.436 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.436 19:59:35 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.436 { 00:04:37.436 "name": "Malloc0", 00:04:37.436 "aliases": [ 00:04:37.436 "4da8d540-a23c-492f-a02f-91e6a58290c0" 00:04:37.436 ], 00:04:37.436 "product_name": "Malloc disk", 00:04:37.436 "block_size": 512, 00:04:37.436 "num_blocks": 16384, 00:04:37.436 "uuid": "4da8d540-a23c-492f-a02f-91e6a58290c0", 00:04:37.436 "assigned_rate_limits": { 00:04:37.436 "rw_ios_per_sec": 0, 00:04:37.436 "rw_mbytes_per_sec": 0, 00:04:37.436 "r_mbytes_per_sec": 0, 00:04:37.436 
"w_mbytes_per_sec": 0 00:04:37.436 }, 00:04:37.436 "claimed": true, 00:04:37.436 "claim_type": "exclusive_write", 00:04:37.436 "zoned": false, 00:04:37.436 "supported_io_types": { 00:04:37.436 "read": true, 00:04:37.436 "write": true, 00:04:37.436 "unmap": true, 00:04:37.436 "write_zeroes": true, 00:04:37.436 "flush": true, 00:04:37.436 "reset": true, 00:04:37.436 "compare": false, 00:04:37.436 "compare_and_write": false, 00:04:37.436 "abort": true, 00:04:37.436 "nvme_admin": false, 00:04:37.436 "nvme_io": false 00:04:37.436 }, 00:04:37.436 "memory_domains": [ 00:04:37.436 { 00:04:37.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.436 "dma_device_type": 2 00:04:37.436 } 00:04:37.436 ], 00:04:37.436 "driver_specific": {} 00:04:37.436 }, 00:04:37.436 { 00:04:37.436 "name": "Passthru0", 00:04:37.436 "aliases": [ 00:04:37.436 "dc392cd4-d046-525e-b931-fdb477da5c61" 00:04:37.436 ], 00:04:37.436 "product_name": "passthru", 00:04:37.436 "block_size": 512, 00:04:37.436 "num_blocks": 16384, 00:04:37.436 "uuid": "dc392cd4-d046-525e-b931-fdb477da5c61", 00:04:37.436 "assigned_rate_limits": { 00:04:37.436 "rw_ios_per_sec": 0, 00:04:37.436 "rw_mbytes_per_sec": 0, 00:04:37.436 "r_mbytes_per_sec": 0, 00:04:37.436 "w_mbytes_per_sec": 0 00:04:37.436 }, 00:04:37.436 "claimed": false, 00:04:37.436 "zoned": false, 00:04:37.436 "supported_io_types": { 00:04:37.436 "read": true, 00:04:37.436 "write": true, 00:04:37.436 "unmap": true, 00:04:37.436 "write_zeroes": true, 00:04:37.436 "flush": true, 00:04:37.436 "reset": true, 00:04:37.436 "compare": false, 00:04:37.436 "compare_and_write": false, 00:04:37.436 "abort": true, 00:04:37.436 "nvme_admin": false, 00:04:37.436 "nvme_io": false 00:04:37.436 }, 00:04:37.436 "memory_domains": [ 00:04:37.436 { 00:04:37.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.436 "dma_device_type": 2 00:04:37.436 } 00:04:37.436 ], 00:04:37.436 "driver_specific": { 00:04:37.436 "passthru": { 00:04:37.436 "name": "Passthru0", 00:04:37.436 "base_bdev_name": "Malloc0" 00:04:37.436 } 00:04:37.436 } 00:04:37.436 } 00:04:37.436 ]' 00:04:37.436 19:59:35 -- rpc/rpc.sh@21 -- # jq length 00:04:37.436 19:59:35 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.436 19:59:35 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.436 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.436 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.436 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.436 19:59:35 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:37.436 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.436 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.436 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.436 19:59:35 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.436 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.436 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.436 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.436 19:59:35 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.436 19:59:35 -- rpc/rpc.sh@26 -- # jq length 00:04:37.436 19:59:35 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.436 00:04:37.436 real 0m0.211s 00:04:37.436 user 0m0.117s 00:04:37.436 sys 0m0.027s 00:04:37.436 19:59:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.436 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.436 ************************************ 00:04:37.436 END TEST rpc_integrity 
00:04:37.436 ************************************ 00:04:37.436 19:59:35 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:37.436 19:59:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:37.436 19:59:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.436 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.436 ************************************ 00:04:37.436 START TEST rpc_plugins 00:04:37.436 ************************************ 00:04:37.436 19:59:35 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:37.436 19:59:35 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:37.436 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.436 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.436 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.436 19:59:35 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:37.436 19:59:35 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:37.436 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.436 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.436 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.436 19:59:35 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:37.436 { 00:04:37.436 "name": "Malloc1", 00:04:37.436 "aliases": [ 00:04:37.436 "47daf982-8421-4efe-8d48-49ed26ac6413" 00:04:37.436 ], 00:04:37.436 "product_name": "Malloc disk", 00:04:37.436 "block_size": 4096, 00:04:37.436 "num_blocks": 256, 00:04:37.436 "uuid": "47daf982-8421-4efe-8d48-49ed26ac6413", 00:04:37.436 "assigned_rate_limits": { 00:04:37.436 "rw_ios_per_sec": 0, 00:04:37.436 "rw_mbytes_per_sec": 0, 00:04:37.436 "r_mbytes_per_sec": 0, 00:04:37.436 "w_mbytes_per_sec": 0 00:04:37.436 }, 00:04:37.436 "claimed": false, 00:04:37.436 "zoned": false, 00:04:37.436 "supported_io_types": { 00:04:37.436 "read": true, 00:04:37.436 "write": true, 00:04:37.436 "unmap": true, 00:04:37.436 "write_zeroes": true, 00:04:37.436 "flush": true, 00:04:37.436 "reset": true, 00:04:37.436 "compare": false, 00:04:37.436 "compare_and_write": false, 00:04:37.436 "abort": true, 00:04:37.436 "nvme_admin": false, 00:04:37.436 "nvme_io": false 00:04:37.436 }, 00:04:37.436 "memory_domains": [ 00:04:37.436 { 00:04:37.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.436 "dma_device_type": 2 00:04:37.436 } 00:04:37.436 ], 00:04:37.436 "driver_specific": {} 00:04:37.436 } 00:04:37.436 ]' 00:04:37.697 19:59:35 -- rpc/rpc.sh@32 -- # jq length 00:04:37.697 19:59:35 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:37.697 19:59:35 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:37.697 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.697 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.697 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.697 19:59:35 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:37.697 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.697 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.697 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.697 19:59:35 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:37.697 19:59:35 -- rpc/rpc.sh@36 -- # jq length 00:04:37.697 19:59:35 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:37.697 00:04:37.697 real 0m0.103s 00:04:37.697 user 0m0.066s 00:04:37.697 sys 0m0.011s 00:04:37.697 19:59:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.697 19:59:35 -- common/autotest_common.sh@10 -- # set +x 
00:04:37.697 ************************************ 00:04:37.697 END TEST rpc_plugins 00:04:37.697 ************************************ 00:04:37.697 19:59:35 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:37.697 19:59:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:37.697 19:59:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.697 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.697 ************************************ 00:04:37.697 START TEST rpc_trace_cmd_test 00:04:37.697 ************************************ 00:04:37.697 19:59:35 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:37.697 19:59:35 -- rpc/rpc.sh@40 -- # local info 00:04:37.697 19:59:35 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:37.697 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.697 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.697 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.697 19:59:35 -- rpc/rpc.sh@42 -- # info='{ 00:04:37.697 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1315438", 00:04:37.697 "tpoint_group_mask": "0x8", 00:04:37.697 "iscsi_conn": { 00:04:37.697 "mask": "0x2", 00:04:37.697 "tpoint_mask": "0x0" 00:04:37.697 }, 00:04:37.697 "scsi": { 00:04:37.697 "mask": "0x4", 00:04:37.697 "tpoint_mask": "0x0" 00:04:37.697 }, 00:04:37.697 "bdev": { 00:04:37.697 "mask": "0x8", 00:04:37.697 "tpoint_mask": "0xffffffffffffffff" 00:04:37.697 }, 00:04:37.697 "nvmf_rdma": { 00:04:37.697 "mask": "0x10", 00:04:37.697 "tpoint_mask": "0x0" 00:04:37.697 }, 00:04:37.697 "nvmf_tcp": { 00:04:37.697 "mask": "0x20", 00:04:37.697 "tpoint_mask": "0x0" 00:04:37.697 }, 00:04:37.698 "ftl": { 00:04:37.698 "mask": "0x40", 00:04:37.698 "tpoint_mask": "0x0" 00:04:37.698 }, 00:04:37.698 "blobfs": { 00:04:37.698 "mask": "0x80", 00:04:37.698 "tpoint_mask": "0x0" 00:04:37.698 }, 00:04:37.698 "dsa": { 00:04:37.698 "mask": "0x200", 00:04:37.698 "tpoint_mask": "0x0" 00:04:37.698 }, 00:04:37.698 "thread": { 00:04:37.698 "mask": "0x400", 00:04:37.698 "tpoint_mask": "0x0" 00:04:37.698 }, 00:04:37.698 "nvme_pcie": { 00:04:37.698 "mask": "0x800", 00:04:37.698 "tpoint_mask": "0x0" 00:04:37.698 }, 00:04:37.698 "iaa": { 00:04:37.698 "mask": "0x1000", 00:04:37.698 "tpoint_mask": "0x0" 00:04:37.698 }, 00:04:37.698 "nvme_tcp": { 00:04:37.698 "mask": "0x2000", 00:04:37.698 "tpoint_mask": "0x0" 00:04:37.698 }, 00:04:37.698 "bdev_nvme": { 00:04:37.698 "mask": "0x4000", 00:04:37.698 "tpoint_mask": "0x0" 00:04:37.698 } 00:04:37.698 }' 00:04:37.698 19:59:35 -- rpc/rpc.sh@43 -- # jq length 00:04:37.698 19:59:35 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:37.698 19:59:35 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:37.698 19:59:35 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:37.698 19:59:35 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:37.698 19:59:35 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:37.698 19:59:35 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:37.698 19:59:35 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:37.698 19:59:35 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:37.959 19:59:35 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:37.959 00:04:37.959 real 0m0.174s 00:04:37.959 user 0m0.146s 00:04:37.959 sys 0m0.022s 00:04:37.959 19:59:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.959 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.959 ************************************ 00:04:37.959 END TEST rpc_trace_cmd_test 
00:04:37.959 ************************************ 00:04:37.959 19:59:35 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:37.959 19:59:35 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:37.959 19:59:35 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:37.959 19:59:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:37.959 19:59:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.959 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.959 ************************************ 00:04:37.959 START TEST rpc_daemon_integrity 00:04:37.959 ************************************ 00:04:37.959 19:59:35 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:37.959 19:59:35 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.959 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.959 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.959 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.959 19:59:35 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:37.959 19:59:35 -- rpc/rpc.sh@13 -- # jq length 00:04:37.959 19:59:35 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.959 19:59:35 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.959 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.959 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.959 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.959 19:59:35 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:37.959 19:59:35 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.959 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.959 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.959 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.959 19:59:35 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.959 { 00:04:37.959 "name": "Malloc2", 00:04:37.959 "aliases": [ 00:04:37.959 "76f33f42-77c9-4e73-ab01-54b3943bda48" 00:04:37.959 ], 00:04:37.959 "product_name": "Malloc disk", 00:04:37.959 "block_size": 512, 00:04:37.959 "num_blocks": 16384, 00:04:37.959 "uuid": "76f33f42-77c9-4e73-ab01-54b3943bda48", 00:04:37.959 "assigned_rate_limits": { 00:04:37.959 "rw_ios_per_sec": 0, 00:04:37.959 "rw_mbytes_per_sec": 0, 00:04:37.959 "r_mbytes_per_sec": 0, 00:04:37.959 "w_mbytes_per_sec": 0 00:04:37.959 }, 00:04:37.959 "claimed": false, 00:04:37.959 "zoned": false, 00:04:37.959 "supported_io_types": { 00:04:37.959 "read": true, 00:04:37.959 "write": true, 00:04:37.959 "unmap": true, 00:04:37.959 "write_zeroes": true, 00:04:37.959 "flush": true, 00:04:37.959 "reset": true, 00:04:37.959 "compare": false, 00:04:37.959 "compare_and_write": false, 00:04:37.959 "abort": true, 00:04:37.959 "nvme_admin": false, 00:04:37.959 "nvme_io": false 00:04:37.959 }, 00:04:37.959 "memory_domains": [ 00:04:37.959 { 00:04:37.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.959 "dma_device_type": 2 00:04:37.959 } 00:04:37.959 ], 00:04:37.959 "driver_specific": {} 00:04:37.959 } 00:04:37.959 ]' 00:04:37.959 19:59:35 -- rpc/rpc.sh@17 -- # jq length 00:04:37.959 19:59:35 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.959 19:59:35 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:37.959 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.959 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.959 [2024-04-25 19:59:35.799479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:37.959 [2024-04-25 19:59:35.799528] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.959 [2024-04-25 19:59:35.799550] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021380 00:04:37.959 [2024-04-25 19:59:35.799560] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.959 [2024-04-25 19:59:35.801390] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.959 [2024-04-25 19:59:35.801416] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.959 Passthru0 00:04:37.959 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.959 19:59:35 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.959 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.959 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.959 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.959 19:59:35 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.959 { 00:04:37.959 "name": "Malloc2", 00:04:37.959 "aliases": [ 00:04:37.959 "76f33f42-77c9-4e73-ab01-54b3943bda48" 00:04:37.959 ], 00:04:37.959 "product_name": "Malloc disk", 00:04:37.959 "block_size": 512, 00:04:37.959 "num_blocks": 16384, 00:04:37.959 "uuid": "76f33f42-77c9-4e73-ab01-54b3943bda48", 00:04:37.959 "assigned_rate_limits": { 00:04:37.959 "rw_ios_per_sec": 0, 00:04:37.959 "rw_mbytes_per_sec": 0, 00:04:37.959 "r_mbytes_per_sec": 0, 00:04:37.959 "w_mbytes_per_sec": 0 00:04:37.959 }, 00:04:37.959 "claimed": true, 00:04:37.959 "claim_type": "exclusive_write", 00:04:37.959 "zoned": false, 00:04:37.959 "supported_io_types": { 00:04:37.959 "read": true, 00:04:37.959 "write": true, 00:04:37.959 "unmap": true, 00:04:37.959 "write_zeroes": true, 00:04:37.959 "flush": true, 00:04:37.959 "reset": true, 00:04:37.959 "compare": false, 00:04:37.959 "compare_and_write": false, 00:04:37.959 "abort": true, 00:04:37.959 "nvme_admin": false, 00:04:37.959 "nvme_io": false 00:04:37.959 }, 00:04:37.959 "memory_domains": [ 00:04:37.959 { 00:04:37.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.959 "dma_device_type": 2 00:04:37.959 } 00:04:37.959 ], 00:04:37.959 "driver_specific": {} 00:04:37.959 }, 00:04:37.959 { 00:04:37.959 "name": "Passthru0", 00:04:37.959 "aliases": [ 00:04:37.959 "16527a8f-21de-5672-9c8b-c21cc5846a9c" 00:04:37.959 ], 00:04:37.959 "product_name": "passthru", 00:04:37.959 "block_size": 512, 00:04:37.959 "num_blocks": 16384, 00:04:37.959 "uuid": "16527a8f-21de-5672-9c8b-c21cc5846a9c", 00:04:37.959 "assigned_rate_limits": { 00:04:37.959 "rw_ios_per_sec": 0, 00:04:37.959 "rw_mbytes_per_sec": 0, 00:04:37.959 "r_mbytes_per_sec": 0, 00:04:37.959 "w_mbytes_per_sec": 0 00:04:37.959 }, 00:04:37.959 "claimed": false, 00:04:37.959 "zoned": false, 00:04:37.959 "supported_io_types": { 00:04:37.959 "read": true, 00:04:37.959 "write": true, 00:04:37.959 "unmap": true, 00:04:37.959 "write_zeroes": true, 00:04:37.959 "flush": true, 00:04:37.959 "reset": true, 00:04:37.959 "compare": false, 00:04:37.959 "compare_and_write": false, 00:04:37.959 "abort": true, 00:04:37.959 "nvme_admin": false, 00:04:37.959 "nvme_io": false 00:04:37.959 }, 00:04:37.959 "memory_domains": [ 00:04:37.959 { 00:04:37.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.959 "dma_device_type": 2 00:04:37.959 } 00:04:37.959 ], 00:04:37.959 "driver_specific": { 00:04:37.959 "passthru": { 00:04:37.959 "name": "Passthru0", 00:04:37.959 "base_bdev_name": "Malloc2" 00:04:37.959 } 00:04:37.959 } 00:04:37.959 } 00:04:37.959 ]' 00:04:37.959 19:59:35 
-- rpc/rpc.sh@21 -- # jq length 00:04:37.959 19:59:35 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.959 19:59:35 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.959 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.959 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.959 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.960 19:59:35 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:37.960 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.960 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.960 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.960 19:59:35 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.960 19:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:37.960 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.960 19:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.960 19:59:35 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.960 19:59:35 -- rpc/rpc.sh@26 -- # jq length 00:04:38.220 19:59:35 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.220 00:04:38.220 real 0m0.214s 00:04:38.220 user 0m0.125s 00:04:38.220 sys 0m0.031s 00:04:38.220 19:59:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.220 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:38.220 ************************************ 00:04:38.220 END TEST rpc_daemon_integrity 00:04:38.220 ************************************ 00:04:38.220 19:59:35 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:38.220 19:59:35 -- rpc/rpc.sh@84 -- # killprocess 1315438 00:04:38.220 19:59:35 -- common/autotest_common.sh@926 -- # '[' -z 1315438 ']' 00:04:38.220 19:59:35 -- common/autotest_common.sh@930 -- # kill -0 1315438 00:04:38.220 19:59:35 -- common/autotest_common.sh@931 -- # uname 00:04:38.220 19:59:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:38.220 19:59:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1315438 00:04:38.220 19:59:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:38.220 19:59:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:38.220 19:59:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1315438' 00:04:38.220 killing process with pid 1315438 00:04:38.220 19:59:35 -- common/autotest_common.sh@945 -- # kill 1315438 00:04:38.220 19:59:35 -- common/autotest_common.sh@950 -- # wait 1315438 00:04:39.167 00:04:39.167 real 0m2.668s 00:04:39.167 user 0m2.982s 00:04:39.167 sys 0m0.721s 00:04:39.167 19:59:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.167 19:59:36 -- common/autotest_common.sh@10 -- # set +x 00:04:39.167 ************************************ 00:04:39.167 END TEST rpc 00:04:39.167 ************************************ 00:04:39.167 19:59:36 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:39.167 19:59:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.167 19:59:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.167 19:59:36 -- common/autotest_common.sh@10 -- # set +x 00:04:39.167 ************************************ 00:04:39.167 START TEST rpc_client 00:04:39.167 ************************************ 00:04:39.167 19:59:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:39.167 * Looking for test storage... 
00:04:39.167 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client 00:04:39.167 19:59:37 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:39.167 OK 00:04:39.167 19:59:37 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:39.167 00:04:39.167 real 0m0.135s 00:04:39.167 user 0m0.051s 00:04:39.167 sys 0m0.092s 00:04:39.167 19:59:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.167 19:59:37 -- common/autotest_common.sh@10 -- # set +x 00:04:39.167 ************************************ 00:04:39.167 END TEST rpc_client 00:04:39.167 ************************************ 00:04:39.428 19:59:37 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:04:39.428 19:59:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.428 19:59:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.428 19:59:37 -- common/autotest_common.sh@10 -- # set +x 00:04:39.428 ************************************ 00:04:39.428 START TEST json_config 00:04:39.428 ************************************ 00:04:39.428 19:59:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:04:39.428 19:59:37 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:04:39.428 19:59:37 -- nvmf/common.sh@7 -- # uname -s 00:04:39.428 19:59:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.428 19:59:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.428 19:59:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.428 19:59:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.428 19:59:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.428 19:59:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.428 19:59:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.428 19:59:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.428 19:59:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.428 19:59:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.428 19:59:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:04:39.428 19:59:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:04:39.428 19:59:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.428 19:59:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.428 19:59:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:39.428 19:59:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:04:39.428 19:59:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.428 19:59:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.428 19:59:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.428 19:59:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.428 
19:59:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.428 19:59:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.428 19:59:37 -- paths/export.sh@5 -- # export PATH 00:04:39.428 19:59:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.428 19:59:37 -- nvmf/common.sh@46 -- # : 0 00:04:39.428 19:59:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:39.428 19:59:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:39.428 19:59:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:39.428 19:59:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.428 19:59:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.428 19:59:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:39.428 19:59:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:39.428 19:59:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:39.428 19:59:37 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:39.428 19:59:37 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:39.428 19:59:37 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:39.428 19:59:37 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:39.428 19:59:37 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:39.428 19:59:37 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:39.428 19:59:37 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:39.428 19:59:37 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:39.428 19:59:37 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:39.428 19:59:37 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:39.428 19:59:37 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json') 00:04:39.428 19:59:37 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:39.428 19:59:37 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:39.428 19:59:37 -- json_config/json_config.sh@418 -- # trap 'on_error_exit 
"${FUNCNAME}" "${LINENO}"' ERR 00:04:39.428 19:59:37 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:39.428 INFO: JSON configuration test init 00:04:39.428 19:59:37 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:39.428 19:59:37 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:39.428 19:59:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:39.428 19:59:37 -- common/autotest_common.sh@10 -- # set +x 00:04:39.428 19:59:37 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:39.429 19:59:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:39.429 19:59:37 -- common/autotest_common.sh@10 -- # set +x 00:04:39.429 19:59:37 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:39.429 19:59:37 -- json_config/json_config.sh@98 -- # local app=target 00:04:39.429 19:59:37 -- json_config/json_config.sh@99 -- # shift 00:04:39.429 19:59:37 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:39.429 19:59:37 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:39.429 19:59:37 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:39.429 19:59:37 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:39.429 19:59:37 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:39.429 19:59:37 -- json_config/json_config.sh@111 -- # app_pid[$app]=1316241 00:04:39.429 19:59:37 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:39.429 Waiting for target to run... 00:04:39.429 19:59:37 -- json_config/json_config.sh@114 -- # waitforlisten 1316241 /var/tmp/spdk_tgt.sock 00:04:39.429 19:59:37 -- common/autotest_common.sh@819 -- # '[' -z 1316241 ']' 00:04:39.429 19:59:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:39.429 19:59:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:39.429 19:59:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:39.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:39.429 19:59:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:39.429 19:59:37 -- common/autotest_common.sh@10 -- # set +x 00:04:39.429 19:59:37 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:39.429 [2024-04-25 19:59:37.255784] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:39.429 [2024-04-25 19:59:37.255918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1316241 ] 00:04:39.429 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.690 [2024-04-25 19:59:37.525917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.690 [2024-04-25 19:59:37.604021] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:39.690 [2024-04-25 19:59:37.604191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.260 19:59:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:40.260 19:59:37 -- common/autotest_common.sh@852 -- # return 0 00:04:40.260 19:59:37 -- json_config/json_config.sh@115 -- # echo '' 00:04:40.260 00:04:40.260 19:59:37 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:40.260 19:59:37 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:40.260 19:59:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:40.260 19:59:37 -- common/autotest_common.sh@10 -- # set +x 00:04:40.260 19:59:37 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:40.260 19:59:37 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:40.260 19:59:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:40.260 19:59:37 -- common/autotest_common.sh@10 -- # set +x 00:04:40.260 19:59:37 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:40.260 19:59:37 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:40.260 19:59:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:41.199 19:59:39 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:41.199 19:59:39 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:41.199 19:59:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:41.199 19:59:39 -- common/autotest_common.sh@10 -- # set +x 00:04:41.199 19:59:39 -- json_config/json_config.sh@48 -- # local ret=0 00:04:41.199 19:59:39 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:41.199 19:59:39 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:41.199 19:59:39 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:41.199 19:59:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:41.199 19:59:39 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:41.459 19:59:39 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:41.460 19:59:39 -- json_config/json_config.sh@51 -- # local get_types 00:04:41.460 19:59:39 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:41.460 19:59:39 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:41.460 19:59:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:41.460 19:59:39 -- common/autotest_common.sh@10 -- # set +x 00:04:41.460 19:59:39 -- json_config/json_config.sh@58 -- # return 0 00:04:41.460 19:59:39 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:41.460 19:59:39 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:41.460 19:59:39 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:41.460 19:59:39 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:41.460 19:59:39 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:41.460 19:59:39 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:41.460 19:59:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:41.460 19:59:39 -- common/autotest_common.sh@10 -- # set +x 00:04:41.460 19:59:39 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:41.460 19:59:39 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:41.460 19:59:39 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:41.460 19:59:39 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:41.460 19:59:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:41.720 MallocForNvmf0 00:04:41.720 19:59:39 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:41.720 19:59:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:41.720 MallocForNvmf1 00:04:41.720 19:59:39 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:41.720 19:59:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:41.980 [2024-04-25 19:59:39.738589] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:41.980 19:59:39 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:41.980 19:59:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:41.980 19:59:39 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:41.980 19:59:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:42.241 19:59:40 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:42.241 19:59:40 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:42.501 19:59:40 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:42.501 19:59:40 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:42.501 [2024-04-25 19:59:40.327126] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:42.501 19:59:40 -- 
json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:42.501 19:59:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:42.501 19:59:40 -- common/autotest_common.sh@10 -- # set +x 00:04:42.501 19:59:40 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:42.501 19:59:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:42.501 19:59:40 -- common/autotest_common.sh@10 -- # set +x 00:04:42.501 19:59:40 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:42.501 19:59:40 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:42.501 19:59:40 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:42.761 MallocBdevForConfigChangeCheck 00:04:42.761 19:59:40 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:42.761 19:59:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:42.761 19:59:40 -- common/autotest_common.sh@10 -- # set +x 00:04:42.761 19:59:40 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:42.761 19:59:40 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:43.022 19:59:40 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:43.023 INFO: shutting down applications... 00:04:43.023 19:59:40 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:43.023 19:59:40 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:43.023 19:59:40 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:43.023 19:59:40 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:44.933 Calling clear_iscsi_subsystem 00:04:44.933 Calling clear_nvmf_subsystem 00:04:44.933 Calling clear_nbd_subsystem 00:04:44.933 Calling clear_ublk_subsystem 00:04:44.933 Calling clear_vhost_blk_subsystem 00:04:44.933 Calling clear_vhost_scsi_subsystem 00:04:44.933 Calling clear_scheduler_subsystem 00:04:44.934 Calling clear_bdev_subsystem 00:04:44.934 Calling clear_accel_subsystem 00:04:44.934 Calling clear_vmd_subsystem 00:04:44.934 Calling clear_sock_subsystem 00:04:44.934 Calling clear_iobuf_subsystem 00:04:44.934 19:59:42 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py 00:04:44.934 19:59:42 -- json_config/json_config.sh@396 -- # count=100 00:04:44.934 19:59:42 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:44.934 19:59:42 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:44.934 19:59:42 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.934 19:59:42 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:45.231 19:59:43 -- json_config/json_config.sh@398 -- # break 00:04:45.231 19:59:43 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:45.231 19:59:43 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:45.231 19:59:43 -- 
json_config/json_config.sh@120 -- # local app=target 00:04:45.231 19:59:43 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:45.231 19:59:43 -- json_config/json_config.sh@124 -- # [[ -n 1316241 ]] 00:04:45.231 19:59:43 -- json_config/json_config.sh@127 -- # kill -SIGINT 1316241 00:04:45.231 19:59:43 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:45.231 19:59:43 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:45.231 19:59:43 -- json_config/json_config.sh@130 -- # kill -0 1316241 00:04:45.231 19:59:43 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:45.816 19:59:43 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:45.816 19:59:43 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:45.816 19:59:43 -- json_config/json_config.sh@130 -- # kill -0 1316241 00:04:45.816 19:59:43 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:45.816 19:59:43 -- json_config/json_config.sh@132 -- # break 00:04:45.816 19:59:43 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:45.816 19:59:43 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:45.816 SPDK target shutdown done 00:04:45.816 19:59:43 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:45.816 INFO: relaunching applications... 00:04:45.816 19:59:43 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:45.816 19:59:43 -- json_config/json_config.sh@98 -- # local app=target 00:04:45.816 19:59:43 -- json_config/json_config.sh@99 -- # shift 00:04:45.816 19:59:43 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:45.816 19:59:43 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:45.816 19:59:43 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:45.816 19:59:43 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:45.817 19:59:43 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:45.817 19:59:43 -- json_config/json_config.sh@111 -- # app_pid[$app]=1317573 00:04:45.817 19:59:43 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:45.817 Waiting for target to run... 00:04:45.817 19:59:43 -- json_config/json_config.sh@114 -- # waitforlisten 1317573 /var/tmp/spdk_tgt.sock 00:04:45.817 19:59:43 -- common/autotest_common.sh@819 -- # '[' -z 1317573 ']' 00:04:45.817 19:59:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:45.817 19:59:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:45.817 19:59:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:45.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:45.817 19:59:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:45.817 19:59:43 -- common/autotest_common.sh@10 -- # set +x 00:04:45.817 19:59:43 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:45.817 [2024-04-25 19:59:43.743826] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:45.817 [2024-04-25 19:59:43.743968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1317573 ] 00:04:46.078 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.337 [2024-04-25 19:59:44.239477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.598 [2024-04-25 19:59:44.331670] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:46.598 [2024-04-25 19:59:44.331893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.986 [2024-04-25 19:59:45.474708] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.986 [2024-04-25 19:59:45.506999] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:47.986 19:59:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:47.986 19:59:45 -- common/autotest_common.sh@852 -- # return 0 00:04:47.986 19:59:45 -- json_config/json_config.sh@115 -- # echo '' 00:04:47.986 00:04:47.986 19:59:45 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:47.986 19:59:45 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:47.986 INFO: Checking if target configuration is the same... 00:04:47.986 19:59:45 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.986 19:59:45 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:47.986 19:59:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.986 + '[' 2 -ne 2 ']' 00:04:47.986 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:47.986 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:04:47.986 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:04:47.986 +++ basename /dev/fd/62 00:04:47.986 ++ mktemp /tmp/62.XXX 00:04:47.986 + tmp_file_1=/tmp/62.oKE 00:04:47.986 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.986 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:47.986 + tmp_file_2=/tmp/spdk_tgt_config.json.hUK 00:04:47.986 + ret=0 00:04:47.986 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:48.247 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:48.247 + diff -u /tmp/62.oKE /tmp/spdk_tgt_config.json.hUK 00:04:48.247 + echo 'INFO: JSON config files are the same' 00:04:48.247 INFO: JSON config files are the same 00:04:48.247 + rm /tmp/62.oKE /tmp/spdk_tgt_config.json.hUK 00:04:48.247 + exit 0 00:04:48.247 19:59:46 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:48.247 19:59:46 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:48.247 INFO: changing configuration and checking if this can be detected... 
00:04:48.247 19:59:46 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:48.247 19:59:46 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:48.509 19:59:46 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.509 19:59:46 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:48.509 19:59:46 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.509 + '[' 2 -ne 2 ']' 00:04:48.509 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:48.509 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:04:48.509 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:04:48.509 +++ basename /dev/fd/62 00:04:48.509 ++ mktemp /tmp/62.XXX 00:04:48.509 + tmp_file_1=/tmp/62.Ybo 00:04:48.509 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.509 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:48.509 + tmp_file_2=/tmp/spdk_tgt_config.json.9EX 00:04:48.509 + ret=0 00:04:48.509 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:48.770 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:48.770 + diff -u /tmp/62.Ybo /tmp/spdk_tgt_config.json.9EX 00:04:48.770 + ret=1 00:04:48.770 + echo '=== Start of file: /tmp/62.Ybo ===' 00:04:48.770 + cat /tmp/62.Ybo 00:04:48.770 + echo '=== End of file: /tmp/62.Ybo ===' 00:04:48.770 + echo '' 00:04:48.770 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9EX ===' 00:04:48.770 + cat /tmp/spdk_tgt_config.json.9EX 00:04:48.770 + echo '=== End of file: /tmp/spdk_tgt_config.json.9EX ===' 00:04:48.770 + echo '' 00:04:48.770 + rm /tmp/62.Ybo /tmp/spdk_tgt_config.json.9EX 00:04:48.770 + exit 1 00:04:48.770 19:59:46 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:04:48.770 INFO: configuration change detected. 
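The change-detection pass works the same way, except a known bdev is removed first so the two documents can no longer match. A sketch, again assuming the socket and paths from this run, with the sorted temp files named as in the previous sketch:

SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk

# Remove the marker bdev that the saved spdk_tgt_config.json still describes.
$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
    bdev_malloc_delete MallocBdevForConfigChangeCheck

# Re-run the save/sort/diff comparison; the saved file still contains the malloc
# bdev, so diff now exits 1, which the test reports as a detected change.
$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | $SPDK/test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
diff -u /tmp/live_sorted.json /tmp/file_sorted.json \
    || echo 'INFO: configuration change detected.'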
00:04:48.770 19:59:46 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:48.770 19:59:46 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:48.770 19:59:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:48.770 19:59:46 -- common/autotest_common.sh@10 -- # set +x 00:04:48.770 19:59:46 -- json_config/json_config.sh@360 -- # local ret=0 00:04:48.770 19:59:46 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:48.770 19:59:46 -- json_config/json_config.sh@370 -- # [[ -n 1317573 ]] 00:04:48.770 19:59:46 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:48.770 19:59:46 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:48.770 19:59:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:48.770 19:59:46 -- common/autotest_common.sh@10 -- # set +x 00:04:48.770 19:59:46 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:48.770 19:59:46 -- json_config/json_config.sh@246 -- # uname -s 00:04:48.770 19:59:46 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:48.770 19:59:46 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:48.770 19:59:46 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:48.770 19:59:46 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:48.770 19:59:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:48.770 19:59:46 -- common/autotest_common.sh@10 -- # set +x 00:04:48.770 19:59:46 -- json_config/json_config.sh@376 -- # killprocess 1317573 00:04:48.770 19:59:46 -- common/autotest_common.sh@926 -- # '[' -z 1317573 ']' 00:04:48.770 19:59:46 -- common/autotest_common.sh@930 -- # kill -0 1317573 00:04:48.770 19:59:46 -- common/autotest_common.sh@931 -- # uname 00:04:48.770 19:59:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:48.770 19:59:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1317573 00:04:48.770 19:59:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:48.770 19:59:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:48.770 19:59:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1317573' 00:04:48.770 killing process with pid 1317573 00:04:48.770 19:59:46 -- common/autotest_common.sh@945 -- # kill 1317573 00:04:48.770 19:59:46 -- common/autotest_common.sh@950 -- # wait 1317573 00:04:50.152 19:59:48 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.152 19:59:48 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:50.152 19:59:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:50.152 19:59:48 -- common/autotest_common.sh@10 -- # set +x 00:04:50.152 19:59:48 -- json_config/json_config.sh@381 -- # return 0 00:04:50.152 19:59:48 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:50.152 INFO: Success 00:04:50.152 00:04:50.152 real 0m10.932s 00:04:50.152 user 0m11.619s 00:04:50.152 sys 0m2.078s 00:04:50.152 19:59:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.152 19:59:48 -- common/autotest_common.sh@10 -- # set +x 00:04:50.152 ************************************ 00:04:50.152 END TEST json_config 00:04:50.152 ************************************ 00:04:50.152 19:59:48 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:50.152 19:59:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.152 19:59:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.152 19:59:48 -- common/autotest_common.sh@10 -- # set +x 00:04:50.152 ************************************ 00:04:50.152 START TEST json_config_extra_key 00:04:50.152 ************************************ 00:04:50.152 19:59:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:50.413 19:59:48 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:04:50.413 19:59:48 -- nvmf/common.sh@7 -- # uname -s 00:04:50.413 19:59:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.413 19:59:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.413 19:59:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.413 19:59:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.413 19:59:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.413 19:59:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.413 19:59:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.413 19:59:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.413 19:59:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.413 19:59:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.413 19:59:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:04:50.413 19:59:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:04:50.413 19:59:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.413 19:59:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.413 19:59:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.413 19:59:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:04:50.413 19:59:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.413 19:59:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.413 19:59:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.413 19:59:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.413 19:59:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.413 19:59:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.413 19:59:48 -- paths/export.sh@5 -- # export PATH 00:04:50.413 19:59:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.413 19:59:48 -- nvmf/common.sh@46 -- # : 0 00:04:50.413 19:59:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:50.413 19:59:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:50.413 19:59:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:50.413 19:59:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.413 19:59:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.413 19:59:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:50.413 19:59:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:50.413 19:59:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:50.413 19:59:48 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:50.413 19:59:48 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:50.413 19:59:48 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:50.413 19:59:48 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:50.413 19:59:48 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:50.413 19:59:48 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:50.413 19:59:48 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:50.413 19:59:48 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:50.413 19:59:48 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:50.414 19:59:48 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:50.414 INFO: launching applications... 00:04:50.414 19:59:48 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:04:50.414 19:59:48 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:50.414 19:59:48 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:50.414 19:59:48 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:50.414 19:59:48 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:50.414 19:59:48 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1318592 00:04:50.414 19:59:48 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:50.414 Waiting for target to run... 
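Below, json_config_extra_key.sh starts its own spdk_tgt from test/json_config/extra_key.json and blocks until the RPC socket answers. A sketch of that launch step with the paths from this run; the polling loop is only an illustrative stand-in for the waitforlisten helper:

SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk

$SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json $SPDK/test/json_config/extra_key.json &
tgt_pid=$!

# Stand-in for waitforlisten: poll until the UNIX-domain RPC socket services
# a trivial request.
until $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done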
00:04:50.414 19:59:48 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1318592 /var/tmp/spdk_tgt.sock 00:04:50.414 19:59:48 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:04:50.414 19:59:48 -- common/autotest_common.sh@819 -- # '[' -z 1318592 ']' 00:04:50.414 19:59:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.414 19:59:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:50.414 19:59:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:50.414 19:59:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:50.414 19:59:48 -- common/autotest_common.sh@10 -- # set +x 00:04:50.414 [2024-04-25 19:59:48.197889] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:50.414 [2024-04-25 19:59:48.197988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1318592 ] 00:04:50.414 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.674 [2024-04-25 19:59:48.464562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.674 [2024-04-25 19:59:48.544444] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:50.674 [2024-04-25 19:59:48.544630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.244 19:59:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:51.244 19:59:48 -- common/autotest_common.sh@852 -- # return 0 00:04:51.244 19:59:48 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:51.244 00:04:51.244 19:59:48 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:51.244 INFO: shutting down applications... 
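The shutdown traced next uses the same pattern as the earlier json_config teardown: send SIGINT, then poll the pid with kill -0 in half-second steps (at most 30 tries) before declaring the target gone. A minimal sketch of that pattern, with tgt_pid as a placeholder for the pid recorded at launch:

kill -SIGINT "$tgt_pid"
i=0
while (( i < 30 )); do
    kill -0 "$tgt_pid" 2>/dev/null || break   # target has exited
    sleep 0.5
    (( i++ ))
done
echo 'SPDK target shutdown done'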
00:04:51.244 19:59:48 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:51.244 19:59:48 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:51.244 19:59:48 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:51.244 19:59:48 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1318592 ]] 00:04:51.244 19:59:48 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1318592 00:04:51.244 19:59:48 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:51.244 19:59:48 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:51.244 19:59:48 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1318592 00:04:51.244 19:59:48 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:51.504 19:59:49 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:51.504 19:59:49 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:51.504 19:59:49 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1318592 00:04:51.504 19:59:49 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:52.077 19:59:49 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:52.077 19:59:49 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:52.077 19:59:49 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1318592 00:04:52.077 19:59:49 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:52.077 19:59:49 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:52.077 19:59:49 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:52.077 19:59:49 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:52.077 SPDK target shutdown done 00:04:52.077 19:59:49 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:52.077 Success 00:04:52.077 00:04:52.077 real 0m1.852s 00:04:52.077 user 0m1.640s 00:04:52.077 sys 0m0.396s 00:04:52.077 19:59:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.077 19:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:52.077 ************************************ 00:04:52.077 END TEST json_config_extra_key 00:04:52.077 ************************************ 00:04:52.077 19:59:49 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:52.077 19:59:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:52.077 19:59:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:52.077 19:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:52.077 ************************************ 00:04:52.077 START TEST alias_rpc 00:04:52.077 ************************************ 00:04:52.077 19:59:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:52.337 * Looking for test storage... 
00:04:52.337 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc 00:04:52.337 19:59:50 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:52.337 19:59:50 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1318949 00:04:52.337 19:59:50 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1318949 00:04:52.337 19:59:50 -- common/autotest_common.sh@819 -- # '[' -z 1318949 ']' 00:04:52.337 19:59:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.337 19:59:50 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.337 19:59:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:52.337 19:59:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.337 19:59:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:52.337 19:59:50 -- common/autotest_common.sh@10 -- # set +x 00:04:52.337 [2024-04-25 19:59:50.115521] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:52.337 [2024-04-25 19:59:50.115658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1318949 ] 00:04:52.337 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.337 [2024-04-25 19:59:50.233506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.599 [2024-04-25 19:59:50.333318] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:52.599 [2024-04-25 19:59:50.333515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.169 19:59:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:53.169 19:59:50 -- common/autotest_common.sh@852 -- # return 0 00:04:53.169 19:59:50 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:53.169 19:59:51 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1318949 00:04:53.169 19:59:51 -- common/autotest_common.sh@926 -- # '[' -z 1318949 ']' 00:04:53.170 19:59:51 -- common/autotest_common.sh@930 -- # kill -0 1318949 00:04:53.170 19:59:51 -- common/autotest_common.sh@931 -- # uname 00:04:53.170 19:59:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:53.170 19:59:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1318949 00:04:53.430 19:59:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:53.430 19:59:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:53.430 19:59:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1318949' 00:04:53.430 killing process with pid 1318949 00:04:53.430 19:59:51 -- common/autotest_common.sh@945 -- # kill 1318949 00:04:53.430 19:59:51 -- common/autotest_common.sh@950 -- # wait 1318949 00:04:54.000 00:04:54.000 real 0m1.955s 00:04:54.000 user 0m2.008s 00:04:54.000 sys 0m0.454s 00:04:54.000 19:59:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.000 19:59:51 -- common/autotest_common.sh@10 -- # set +x 00:04:54.000 ************************************ 00:04:54.000 END TEST alias_rpc 00:04:54.000 ************************************ 00:04:54.259 19:59:51 -- 
spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:04:54.259 19:59:51 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:54.259 19:59:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.259 19:59:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.259 19:59:51 -- common/autotest_common.sh@10 -- # set +x 00:04:54.259 ************************************ 00:04:54.259 START TEST spdkcli_tcp 00:04:54.259 ************************************ 00:04:54.259 19:59:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:54.259 * Looking for test storage... 00:04:54.259 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli 00:04:54.260 19:59:52 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh 00:04:54.260 19:59:52 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:54.260 19:59:52 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py 00:04:54.260 19:59:52 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:54.260 19:59:52 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:54.260 19:59:52 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:54.260 19:59:52 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:54.260 19:59:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:54.260 19:59:52 -- common/autotest_common.sh@10 -- # set +x 00:04:54.260 19:59:52 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1319418 00:04:54.260 19:59:52 -- spdkcli/tcp.sh@27 -- # waitforlisten 1319418 00:04:54.260 19:59:52 -- common/autotest_common.sh@819 -- # '[' -z 1319418 ']' 00:04:54.260 19:59:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.260 19:59:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:54.260 19:59:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.260 19:59:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:54.260 19:59:52 -- common/autotest_common.sh@10 -- # set +x 00:04:54.260 19:59:52 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:54.260 [2024-04-25 19:59:52.110094] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:54.260 [2024-04-25 19:59:52.110229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1319418 ] 00:04:54.260 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.520 [2024-04-25 19:59:52.227161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.520 [2024-04-25 19:59:52.325449] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:54.520 [2024-04-25 19:59:52.325714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.520 [2024-04-25 19:59:52.325714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.092 19:59:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:55.092 19:59:52 -- common/autotest_common.sh@852 -- # return 0 00:04:55.092 19:59:52 -- spdkcli/tcp.sh@31 -- # socat_pid=1319602 00:04:55.092 19:59:52 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:55.092 19:59:52 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:55.092 [ 00:04:55.092 "bdev_malloc_delete", 00:04:55.092 "bdev_malloc_create", 00:04:55.092 "bdev_null_resize", 00:04:55.092 "bdev_null_delete", 00:04:55.092 "bdev_null_create", 00:04:55.092 "bdev_nvme_cuse_unregister", 00:04:55.092 "bdev_nvme_cuse_register", 00:04:55.092 "bdev_opal_new_user", 00:04:55.092 "bdev_opal_set_lock_state", 00:04:55.092 "bdev_opal_delete", 00:04:55.092 "bdev_opal_get_info", 00:04:55.092 "bdev_opal_create", 00:04:55.092 "bdev_nvme_opal_revert", 00:04:55.092 "bdev_nvme_opal_init", 00:04:55.092 "bdev_nvme_send_cmd", 00:04:55.092 "bdev_nvme_get_path_iostat", 00:04:55.092 "bdev_nvme_get_mdns_discovery_info", 00:04:55.092 "bdev_nvme_stop_mdns_discovery", 00:04:55.092 "bdev_nvme_start_mdns_discovery", 00:04:55.092 "bdev_nvme_set_multipath_policy", 00:04:55.092 "bdev_nvme_set_preferred_path", 00:04:55.092 "bdev_nvme_get_io_paths", 00:04:55.092 "bdev_nvme_remove_error_injection", 00:04:55.092 "bdev_nvme_add_error_injection", 00:04:55.092 "bdev_nvme_get_discovery_info", 00:04:55.092 "bdev_nvme_stop_discovery", 00:04:55.092 "bdev_nvme_start_discovery", 00:04:55.092 "bdev_nvme_get_controller_health_info", 00:04:55.092 "bdev_nvme_disable_controller", 00:04:55.092 "bdev_nvme_enable_controller", 00:04:55.092 "bdev_nvme_reset_controller", 00:04:55.092 "bdev_nvme_get_transport_statistics", 00:04:55.092 "bdev_nvme_apply_firmware", 00:04:55.092 "bdev_nvme_detach_controller", 00:04:55.092 "bdev_nvme_get_controllers", 00:04:55.092 "bdev_nvme_attach_controller", 00:04:55.092 "bdev_nvme_set_hotplug", 00:04:55.092 "bdev_nvme_set_options", 00:04:55.092 "bdev_passthru_delete", 00:04:55.092 "bdev_passthru_create", 00:04:55.092 "bdev_lvol_grow_lvstore", 00:04:55.092 "bdev_lvol_get_lvols", 00:04:55.092 "bdev_lvol_get_lvstores", 00:04:55.092 "bdev_lvol_delete", 00:04:55.092 "bdev_lvol_set_read_only", 00:04:55.092 "bdev_lvol_resize", 00:04:55.092 "bdev_lvol_decouple_parent", 00:04:55.092 "bdev_lvol_inflate", 00:04:55.092 "bdev_lvol_rename", 00:04:55.092 "bdev_lvol_clone_bdev", 00:04:55.092 "bdev_lvol_clone", 00:04:55.092 "bdev_lvol_snapshot", 00:04:55.092 "bdev_lvol_create", 00:04:55.092 "bdev_lvol_delete_lvstore", 00:04:55.092 "bdev_lvol_rename_lvstore", 00:04:55.092 "bdev_lvol_create_lvstore", 00:04:55.092 "bdev_raid_set_options", 00:04:55.092 
"bdev_raid_remove_base_bdev", 00:04:55.092 "bdev_raid_add_base_bdev", 00:04:55.092 "bdev_raid_delete", 00:04:55.092 "bdev_raid_create", 00:04:55.092 "bdev_raid_get_bdevs", 00:04:55.092 "bdev_error_inject_error", 00:04:55.092 "bdev_error_delete", 00:04:55.092 "bdev_error_create", 00:04:55.092 "bdev_split_delete", 00:04:55.092 "bdev_split_create", 00:04:55.092 "bdev_delay_delete", 00:04:55.092 "bdev_delay_create", 00:04:55.092 "bdev_delay_update_latency", 00:04:55.092 "bdev_zone_block_delete", 00:04:55.092 "bdev_zone_block_create", 00:04:55.092 "blobfs_create", 00:04:55.092 "blobfs_detect", 00:04:55.092 "blobfs_set_cache_size", 00:04:55.092 "bdev_aio_delete", 00:04:55.092 "bdev_aio_rescan", 00:04:55.092 "bdev_aio_create", 00:04:55.092 "bdev_ftl_set_property", 00:04:55.092 "bdev_ftl_get_properties", 00:04:55.092 "bdev_ftl_get_stats", 00:04:55.092 "bdev_ftl_unmap", 00:04:55.092 "bdev_ftl_unload", 00:04:55.092 "bdev_ftl_delete", 00:04:55.092 "bdev_ftl_load", 00:04:55.092 "bdev_ftl_create", 00:04:55.092 "bdev_virtio_attach_controller", 00:04:55.092 "bdev_virtio_scsi_get_devices", 00:04:55.092 "bdev_virtio_detach_controller", 00:04:55.092 "bdev_virtio_blk_set_hotplug", 00:04:55.092 "bdev_iscsi_delete", 00:04:55.092 "bdev_iscsi_create", 00:04:55.092 "bdev_iscsi_set_options", 00:04:55.092 "accel_error_inject_error", 00:04:55.092 "ioat_scan_accel_module", 00:04:55.092 "dsa_scan_accel_module", 00:04:55.092 "iaa_scan_accel_module", 00:04:55.092 "iscsi_set_options", 00:04:55.092 "iscsi_get_auth_groups", 00:04:55.092 "iscsi_auth_group_remove_secret", 00:04:55.092 "iscsi_auth_group_add_secret", 00:04:55.092 "iscsi_delete_auth_group", 00:04:55.092 "iscsi_create_auth_group", 00:04:55.092 "iscsi_set_discovery_auth", 00:04:55.092 "iscsi_get_options", 00:04:55.092 "iscsi_target_node_request_logout", 00:04:55.092 "iscsi_target_node_set_redirect", 00:04:55.092 "iscsi_target_node_set_auth", 00:04:55.092 "iscsi_target_node_add_lun", 00:04:55.092 "iscsi_get_connections", 00:04:55.092 "iscsi_portal_group_set_auth", 00:04:55.092 "iscsi_start_portal_group", 00:04:55.092 "iscsi_delete_portal_group", 00:04:55.092 "iscsi_create_portal_group", 00:04:55.092 "iscsi_get_portal_groups", 00:04:55.092 "iscsi_delete_target_node", 00:04:55.092 "iscsi_target_node_remove_pg_ig_maps", 00:04:55.092 "iscsi_target_node_add_pg_ig_maps", 00:04:55.092 "iscsi_create_target_node", 00:04:55.092 "iscsi_get_target_nodes", 00:04:55.092 "iscsi_delete_initiator_group", 00:04:55.092 "iscsi_initiator_group_remove_initiators", 00:04:55.092 "iscsi_initiator_group_add_initiators", 00:04:55.092 "iscsi_create_initiator_group", 00:04:55.092 "iscsi_get_initiator_groups", 00:04:55.092 "nvmf_set_crdt", 00:04:55.092 "nvmf_set_config", 00:04:55.092 "nvmf_set_max_subsystems", 00:04:55.092 "nvmf_subsystem_get_listeners", 00:04:55.092 "nvmf_subsystem_get_qpairs", 00:04:55.092 "nvmf_subsystem_get_controllers", 00:04:55.092 "nvmf_get_stats", 00:04:55.092 "nvmf_get_transports", 00:04:55.092 "nvmf_create_transport", 00:04:55.092 "nvmf_get_targets", 00:04:55.092 "nvmf_delete_target", 00:04:55.092 "nvmf_create_target", 00:04:55.092 "nvmf_subsystem_allow_any_host", 00:04:55.092 "nvmf_subsystem_remove_host", 00:04:55.092 "nvmf_subsystem_add_host", 00:04:55.092 "nvmf_subsystem_remove_ns", 00:04:55.092 "nvmf_subsystem_add_ns", 00:04:55.092 "nvmf_subsystem_listener_set_ana_state", 00:04:55.092 "nvmf_discovery_get_referrals", 00:04:55.092 "nvmf_discovery_remove_referral", 00:04:55.092 "nvmf_discovery_add_referral", 00:04:55.092 "nvmf_subsystem_remove_listener", 
00:04:55.092 "nvmf_subsystem_add_listener", 00:04:55.092 "nvmf_delete_subsystem", 00:04:55.092 "nvmf_create_subsystem", 00:04:55.092 "nvmf_get_subsystems", 00:04:55.092 "env_dpdk_get_mem_stats", 00:04:55.092 "nbd_get_disks", 00:04:55.092 "nbd_stop_disk", 00:04:55.092 "nbd_start_disk", 00:04:55.092 "ublk_recover_disk", 00:04:55.092 "ublk_get_disks", 00:04:55.092 "ublk_stop_disk", 00:04:55.092 "ublk_start_disk", 00:04:55.092 "ublk_destroy_target", 00:04:55.092 "ublk_create_target", 00:04:55.092 "virtio_blk_create_transport", 00:04:55.092 "virtio_blk_get_transports", 00:04:55.092 "vhost_controller_set_coalescing", 00:04:55.092 "vhost_get_controllers", 00:04:55.092 "vhost_delete_controller", 00:04:55.092 "vhost_create_blk_controller", 00:04:55.092 "vhost_scsi_controller_remove_target", 00:04:55.092 "vhost_scsi_controller_add_target", 00:04:55.092 "vhost_start_scsi_controller", 00:04:55.092 "vhost_create_scsi_controller", 00:04:55.092 "thread_set_cpumask", 00:04:55.092 "framework_get_scheduler", 00:04:55.092 "framework_set_scheduler", 00:04:55.092 "framework_get_reactors", 00:04:55.092 "thread_get_io_channels", 00:04:55.092 "thread_get_pollers", 00:04:55.092 "thread_get_stats", 00:04:55.092 "framework_monitor_context_switch", 00:04:55.092 "spdk_kill_instance", 00:04:55.092 "log_enable_timestamps", 00:04:55.092 "log_get_flags", 00:04:55.092 "log_clear_flag", 00:04:55.092 "log_set_flag", 00:04:55.092 "log_get_level", 00:04:55.092 "log_set_level", 00:04:55.092 "log_get_print_level", 00:04:55.092 "log_set_print_level", 00:04:55.092 "framework_enable_cpumask_locks", 00:04:55.092 "framework_disable_cpumask_locks", 00:04:55.092 "framework_wait_init", 00:04:55.092 "framework_start_init", 00:04:55.092 "scsi_get_devices", 00:04:55.092 "bdev_get_histogram", 00:04:55.092 "bdev_enable_histogram", 00:04:55.092 "bdev_set_qos_limit", 00:04:55.092 "bdev_set_qd_sampling_period", 00:04:55.092 "bdev_get_bdevs", 00:04:55.092 "bdev_reset_iostat", 00:04:55.092 "bdev_get_iostat", 00:04:55.092 "bdev_examine", 00:04:55.092 "bdev_wait_for_examine", 00:04:55.092 "bdev_set_options", 00:04:55.092 "notify_get_notifications", 00:04:55.092 "notify_get_types", 00:04:55.092 "accel_get_stats", 00:04:55.092 "accel_set_options", 00:04:55.092 "accel_set_driver", 00:04:55.092 "accel_crypto_key_destroy", 00:04:55.092 "accel_crypto_keys_get", 00:04:55.092 "accel_crypto_key_create", 00:04:55.092 "accel_assign_opc", 00:04:55.092 "accel_get_module_info", 00:04:55.092 "accel_get_opc_assignments", 00:04:55.093 "vmd_rescan", 00:04:55.093 "vmd_remove_device", 00:04:55.093 "vmd_enable", 00:04:55.093 "sock_set_default_impl", 00:04:55.093 "sock_impl_set_options", 00:04:55.093 "sock_impl_get_options", 00:04:55.093 "iobuf_get_stats", 00:04:55.093 "iobuf_set_options", 00:04:55.093 "framework_get_pci_devices", 00:04:55.093 "framework_get_config", 00:04:55.093 "framework_get_subsystems", 00:04:55.093 "trace_get_info", 00:04:55.093 "trace_get_tpoint_group_mask", 00:04:55.093 "trace_disable_tpoint_group", 00:04:55.093 "trace_enable_tpoint_group", 00:04:55.093 "trace_clear_tpoint_mask", 00:04:55.093 "trace_set_tpoint_mask", 00:04:55.093 "spdk_get_version", 00:04:55.093 "rpc_get_methods" 00:04:55.093 ] 00:04:55.093 19:59:53 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:55.093 19:59:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:55.093 19:59:53 -- common/autotest_common.sh@10 -- # set +x 00:04:55.353 19:59:53 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:55.353 19:59:53 -- spdkcli/tcp.sh@38 -- # killprocess 
1319418 00:04:55.353 19:59:53 -- common/autotest_common.sh@926 -- # '[' -z 1319418 ']' 00:04:55.353 19:59:53 -- common/autotest_common.sh@930 -- # kill -0 1319418 00:04:55.353 19:59:53 -- common/autotest_common.sh@931 -- # uname 00:04:55.353 19:59:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:55.353 19:59:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1319418 00:04:55.353 19:59:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:55.353 19:59:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:55.353 19:59:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1319418' 00:04:55.353 killing process with pid 1319418 00:04:55.353 19:59:53 -- common/autotest_common.sh@945 -- # kill 1319418 00:04:55.353 19:59:53 -- common/autotest_common.sh@950 -- # wait 1319418 00:04:56.294 00:04:56.294 real 0m2.010s 00:04:56.294 user 0m3.529s 00:04:56.294 sys 0m0.482s 00:04:56.294 19:59:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.294 19:59:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.294 ************************************ 00:04:56.294 END TEST spdkcli_tcp 00:04:56.294 ************************************ 00:04:56.294 19:59:54 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.294 19:59:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.294 19:59:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.294 19:59:54 -- common/autotest_common.sh@10 -- # set +x 00:04:56.294 ************************************ 00:04:56.294 START TEST dpdk_mem_utility 00:04:56.294 ************************************ 00:04:56.294 19:59:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.294 * Looking for test storage... 00:04:56.294 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility 00:04:56.295 19:59:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:56.295 19:59:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1319965 00:04:56.295 19:59:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1319965 00:04:56.295 19:59:54 -- common/autotest_common.sh@819 -- # '[' -z 1319965 ']' 00:04:56.295 19:59:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.295 19:59:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:56.295 19:59:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.295 19:59:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:56.295 19:59:54 -- common/autotest_common.sh@10 -- # set +x 00:04:56.295 19:59:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.295 [2024-04-25 19:59:54.190894] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
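For reference, the spdkcli_tcp run that finished above never talks to the UNIX-domain RPC socket directly: tcp.sh bridges TCP port 9998 to it with socat and points rpc.py at 127.0.0.1:9998 with retry and timeout options. A sketch of that bridge, using the addresses from the trace:

SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk

# Forward TCP 127.0.0.1:9998 to the target's UNIX-domain RPC socket.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# Query the RPC method list through the TCP side (-r retries, -t timeout in seconds).
$SPDK/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid"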
00:04:56.295 [2024-04-25 19:59:54.191035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1319965 ] 00:04:56.555 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.555 [2024-04-25 19:59:54.321675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.555 [2024-04-25 19:59:54.416782] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:56.555 [2024-04-25 19:59:54.417003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.126 19:59:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:57.126 19:59:54 -- common/autotest_common.sh@852 -- # return 0 00:04:57.126 19:59:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:57.126 19:59:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:57.126 19:59:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:57.126 19:59:54 -- common/autotest_common.sh@10 -- # set +x 00:04:57.126 { 00:04:57.126 "filename": "/tmp/spdk_mem_dump.txt" 00:04:57.126 } 00:04:57.126 19:59:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:57.126 19:59:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:57.126 DPDK memory size 820.000000 MiB in 1 heap(s) 00:04:57.126 1 heaps totaling size 820.000000 MiB 00:04:57.126 size: 820.000000 MiB heap id: 0 00:04:57.126 end heaps---------- 00:04:57.126 8 mempools totaling size 598.116089 MiB 00:04:57.126 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:57.126 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:57.126 size: 84.521057 MiB name: bdev_io_1319965 00:04:57.126 size: 51.011292 MiB name: evtpool_1319965 00:04:57.126 size: 50.003479 MiB name: msgpool_1319965 00:04:57.126 size: 21.763794 MiB name: PDU_Pool 00:04:57.126 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:57.126 size: 0.026123 MiB name: Session_Pool 00:04:57.126 end mempools------- 00:04:57.126 6 memzones totaling size 4.142822 MiB 00:04:57.126 size: 1.000366 MiB name: RG_ring_0_1319965 00:04:57.126 size: 1.000366 MiB name: RG_ring_1_1319965 00:04:57.126 size: 1.000366 MiB name: RG_ring_4_1319965 00:04:57.126 size: 1.000366 MiB name: RG_ring_5_1319965 00:04:57.126 size: 0.125366 MiB name: RG_ring_2_1319965 00:04:57.126 size: 0.015991 MiB name: RG_ring_3_1319965 00:04:57.126 end memzones------- 00:04:57.126 19:59:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:57.126 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:04:57.126 list of free elements. 
size: 18.514832 MiB 00:04:57.126 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:57.126 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:57.126 element at address: 0x200007000000 with size: 1.995972 MiB 00:04:57.126 element at address: 0x20000b200000 with size: 1.995972 MiB 00:04:57.126 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:57.126 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:57.126 element at address: 0x200019600000 with size: 0.999329 MiB 00:04:57.126 element at address: 0x200003e00000 with size: 0.996094 MiB 00:04:57.126 element at address: 0x200032200000 with size: 0.994324 MiB 00:04:57.126 element at address: 0x200018e00000 with size: 0.959900 MiB 00:04:57.126 element at address: 0x200019900040 with size: 0.937256 MiB 00:04:57.126 element at address: 0x200000200000 with size: 0.840942 MiB 00:04:57.126 element at address: 0x20001b000000 with size: 0.583191 MiB 00:04:57.126 element at address: 0x200019200000 with size: 0.491150 MiB 00:04:57.126 element at address: 0x200019a00000 with size: 0.485657 MiB 00:04:57.126 element at address: 0x200013800000 with size: 0.470581 MiB 00:04:57.126 element at address: 0x200028400000 with size: 0.411072 MiB 00:04:57.126 element at address: 0x200003a00000 with size: 0.356140 MiB 00:04:57.126 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:04:57.126 list of standard malloc elements. size: 199.220764 MiB 00:04:57.126 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:04:57.126 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:04:57.126 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:57.126 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:57.126 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:57.126 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:57.126 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:04:57.126 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:57.126 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:04:57.126 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:04:57.126 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:04:57.126 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:04:57.126 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:04:57.126 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:57.126 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:57.126 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:57.126 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:57.126 element at address: 0x200003aff980 with size: 0.000244 MiB 00:04:57.126 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:57.126 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:57.126 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:04:57.126 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:04:57.126 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:04:57.126 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:04:57.126 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:04:57.127 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:04:57.127 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:04:57.127 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:04:57.127 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 
00:04:57.127 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:04:57.127 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:04:57.127 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:04:57.127 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:04:57.127 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:04:57.127 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:04:57.127 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:04:57.127 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:04:57.127 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:04:57.127 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:04:57.127 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:04:57.127 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:04:57.127 list of memzone associated elements. size: 602.264404 MiB 00:04:57.127 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:04:57.127 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:57.127 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:04:57.127 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:57.127 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:04:57.127 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1319965_0 00:04:57.127 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:57.127 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1319965_0 00:04:57.127 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:57.127 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1319965_0 00:04:57.127 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:04:57.127 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:57.127 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:04:57.127 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:57.127 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:57.127 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1319965 00:04:57.127 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:57.127 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1319965 00:04:57.127 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:57.127 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1319965 00:04:57.127 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:57.127 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:57.127 element at address: 0x200019abc780 with size: 1.008179 MiB 00:04:57.127 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:57.127 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:57.127 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:57.127 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:04:57.127 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:57.127 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:57.127 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1319965 00:04:57.127 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:57.127 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1319965 00:04:57.127 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:04:57.127 associated memzone info: size: 
1.000366 MiB name: RG_ring_4_1319965 00:04:57.127 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:04:57.127 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1319965 00:04:57.127 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:04:57.127 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1319965 00:04:57.127 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:04:57.127 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:57.127 element at address: 0x200013878780 with size: 0.500549 MiB 00:04:57.127 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:57.127 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:04:57.127 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:57.127 element at address: 0x200003adf740 with size: 0.125549 MiB 00:04:57.127 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1319965 00:04:57.127 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:04:57.127 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:57.127 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:04:57.127 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:57.127 element at address: 0x200003adb500 with size: 0.016174 MiB 00:04:57.127 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1319965 00:04:57.127 element at address: 0x20002846f540 with size: 0.002502 MiB 00:04:57.127 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:57.127 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:04:57.127 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1319965 00:04:57.127 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:04:57.127 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1319965 00:04:57.127 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:04:57.127 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:57.127 19:59:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:57.127 19:59:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1319965 00:04:57.127 19:59:55 -- common/autotest_common.sh@926 -- # '[' -z 1319965 ']' 00:04:57.127 19:59:55 -- common/autotest_common.sh@930 -- # kill -0 1319965 00:04:57.127 19:59:55 -- common/autotest_common.sh@931 -- # uname 00:04:57.388 19:59:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:57.388 19:59:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1319965 00:04:57.388 19:59:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:57.388 19:59:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:57.388 19:59:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1319965' 00:04:57.388 killing process with pid 1319965 00:04:57.388 19:59:55 -- common/autotest_common.sh@945 -- # kill 1319965 00:04:57.388 19:59:55 -- common/autotest_common.sh@950 -- # wait 1319965 00:04:58.331 00:04:58.331 real 0m2.029s 00:04:58.331 user 0m2.008s 00:04:58.331 sys 0m0.497s 00:04:58.331 19:59:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.331 19:59:56 -- common/autotest_common.sh@10 -- # set +x 00:04:58.331 ************************************ 00:04:58.331 END TEST dpdk_mem_utility 00:04:58.331 ************************************ 00:04:58.331 19:59:56 -- spdk/autotest.sh@187 -- # run_test event 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:04:58.331 19:59:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.331 19:59:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.331 19:59:56 -- common/autotest_common.sh@10 -- # set +x 00:04:58.331 ************************************ 00:04:58.331 START TEST event 00:04:58.331 ************************************ 00:04:58.331 19:59:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:04:58.331 * Looking for test storage... 00:04:58.331 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:04:58.331 19:59:56 -- event/event.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:58.331 19:59:56 -- bdev/nbd_common.sh@6 -- # set -e 00:04:58.331 19:59:56 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:58.331 19:59:56 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:58.331 19:59:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.331 19:59:56 -- common/autotest_common.sh@10 -- # set +x 00:04:58.331 ************************************ 00:04:58.331 START TEST event_perf 00:04:58.331 ************************************ 00:04:58.331 19:59:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:58.331 Running I/O for 1 seconds...[2024-04-25 19:59:56.197715] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:58.331 [2024-04-25 19:59:56.197838] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1320323 ] 00:04:58.592 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.592 [2024-04-25 19:59:56.315424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:58.592 [2024-04-25 19:59:56.407987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.592 [2024-04-25 19:59:56.408140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.592 [2024-04-25 19:59:56.408236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.592 [2024-04-25 19:59:56.408250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.976 Running I/O for 1 seconds... 00:04:59.976 lcore 0: 148076 00:04:59.976 lcore 1: 148074 00:04:59.976 lcore 2: 148074 00:04:59.976 lcore 3: 148077 00:04:59.976 done. 
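event_perf above is a standalone benchmark rather than an RPC-driven test: -m 0xF is the reactor core mask (four lcores, matching the four per-lcore counters printed), and -t 1 runs the measurement for one second. A sketch of invoking it directly from the build tree used in this run; the flag meanings are inferred from this trace rather than quoted from the tool's usage text:

SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk

# One-second event framework benchmark across lcores 0-3.
$SPDK/test/event/event_perf/event_perf -m 0xF -t 1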
00:04:59.976 00:04:59.976 real 0m1.395s 00:04:59.976 user 0m4.245s 00:04:59.976 sys 0m0.134s 00:04:59.976 19:59:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.976 19:59:57 -- common/autotest_common.sh@10 -- # set +x 00:04:59.976 ************************************ 00:04:59.976 END TEST event_perf 00:04:59.976 ************************************ 00:04:59.976 19:59:57 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:59.976 19:59:57 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:59.976 19:59:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:59.976 19:59:57 -- common/autotest_common.sh@10 -- # set +x 00:04:59.976 ************************************ 00:04:59.976 START TEST event_reactor 00:04:59.976 ************************************ 00:04:59.976 19:59:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:59.976 [2024-04-25 19:59:57.628049] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:59.976 [2024-04-25 19:59:57.628175] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1320644 ] 00:04:59.976 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.976 [2024-04-25 19:59:57.744654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.976 [2024-04-25 19:59:57.838262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.360 test_start 00:05:01.360 oneshot 00:05:01.360 tick 100 00:05:01.360 tick 100 00:05:01.360 tick 250 00:05:01.360 tick 100 00:05:01.360 tick 100 00:05:01.360 tick 100 00:05:01.360 tick 250 00:05:01.360 tick 500 00:05:01.360 tick 100 00:05:01.360 tick 100 00:05:01.360 tick 250 00:05:01.360 tick 100 00:05:01.360 tick 100 00:05:01.360 test_end 00:05:01.360 00:05:01.360 real 0m1.401s 00:05:01.360 user 0m1.259s 00:05:01.360 sys 0m0.133s 00:05:01.360 19:59:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.360 19:59:58 -- common/autotest_common.sh@10 -- # set +x 00:05:01.360 ************************************ 00:05:01.360 END TEST event_reactor 00:05:01.360 ************************************ 00:05:01.360 19:59:59 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.360 19:59:59 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:01.360 19:59:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:01.360 19:59:59 -- common/autotest_common.sh@10 -- # set +x 00:05:01.360 ************************************ 00:05:01.360 START TEST event_reactor_perf 00:05:01.360 ************************************ 00:05:01.360 19:59:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.360 [2024-04-25 19:59:59.068042] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:01.360 [2024-04-25 19:59:59.068167] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1320957 ] 00:05:01.360 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.360 [2024-04-25 19:59:59.185707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.360 [2024-04-25 19:59:59.280963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.745 test_start 00:05:02.745 test_end 00:05:02.745 Performance: 398886 events per second 00:05:02.745 00:05:02.745 real 0m1.398s 00:05:02.745 user 0m1.254s 00:05:02.745 sys 0m0.135s 00:05:02.745 20:00:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.745 20:00:00 -- common/autotest_common.sh@10 -- # set +x 00:05:02.745 ************************************ 00:05:02.745 END TEST event_reactor_perf 00:05:02.745 ************************************ 00:05:02.745 20:00:00 -- event/event.sh@49 -- # uname -s 00:05:02.745 20:00:00 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:02.745 20:00:00 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:02.745 20:00:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:02.745 20:00:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:02.745 20:00:00 -- common/autotest_common.sh@10 -- # set +x 00:05:02.745 ************************************ 00:05:02.745 START TEST event_scheduler 00:05:02.745 ************************************ 00:05:02.745 20:00:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:02.745 * Looking for test storage... 00:05:02.745 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler 00:05:02.745 20:00:00 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:02.745 20:00:00 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1321351 00:05:02.745 20:00:00 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.745 20:00:00 -- scheduler/scheduler.sh@37 -- # waitforlisten 1321351 00:05:02.745 20:00:00 -- common/autotest_common.sh@819 -- # '[' -z 1321351 ']' 00:05:02.745 20:00:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.745 20:00:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:02.745 20:00:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.745 20:00:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:02.745 20:00:00 -- common/autotest_common.sh@10 -- # set +x 00:05:02.745 20:00:00 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:02.745 [2024-04-25 20:00:00.639157] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:02.745 [2024-04-25 20:00:00.639302] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1321351 ] 00:05:03.005 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.005 [2024-04-25 20:00:00.768015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:03.005 [2024-04-25 20:00:00.871716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.005 [2024-04-25 20:00:00.871869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.005 [2024-04-25 20:00:00.871964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.005 [2024-04-25 20:00:00.871974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:03.572 20:00:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:03.572 20:00:01 -- common/autotest_common.sh@852 -- # return 0 00:05:03.572 20:00:01 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:03.572 20:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:03.572 20:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.572 POWER: Env isn't set yet! 00:05:03.572 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:03.572 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.572 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.572 POWER: Attempting to initialise PSTAT power management... 00:05:03.572 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:03.572 POWER: Initialized successfully for lcore 0 power management 00:05:03.572 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:03.572 POWER: Initialized successfully for lcore 1 power management 00:05:03.572 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:03.572 POWER: Initialized successfully for lcore 2 power management 00:05:03.572 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:03.572 POWER: Initialized successfully for lcore 3 power management 00:05:03.572 20:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:03.572 20:00:01 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:03.572 20:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:03.572 20:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.857 [2024-04-25 20:00:01.589515] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
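The scheduler test app startup above amounts to launching the app with init gated on RPC, selecting the dynamic scheduler, and then letting framework init complete. A rough sketch of that sequence using the workspace rpc.py against the default /var/tmp/spdk.sock socket the app reports it is listening on; the log's rpc_cmd helper wraps the same calls, and backgrounding the app with a captured pid stands in for the harness's waitforlisten step:

    # start the scheduler test app pinned as in the log: 4 cores, main lcore 2
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    # pick the dynamic scheduler, then finish framework initialization
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init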
00:05:03.857 20:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:03.857 20:00:01 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:03.857 20:00:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:03.857 20:00:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:03.857 20:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.857 ************************************ 00:05:03.857 START TEST scheduler_create_thread 00:05:03.857 ************************************ 00:05:03.857 20:00:01 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:03.857 20:00:01 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:03.857 20:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:03.857 20:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.857 2 00:05:03.857 20:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:03.857 20:00:01 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:03.857 20:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:03.857 20:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.857 3 00:05:03.857 20:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:03.857 20:00:01 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:03.857 20:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:03.857 20:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.857 4 00:05:03.857 20:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:03.857 20:00:01 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:03.857 20:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:03.857 20:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.857 5 00:05:03.857 20:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:03.857 20:00:01 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:03.857 20:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:03.857 20:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.857 6 00:05:03.857 20:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:03.857 20:00:01 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:03.857 20:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:03.857 20:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.857 7 00:05:03.857 20:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:03.857 20:00:01 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:03.857 20:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:03.857 20:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.857 8 00:05:03.857 20:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:03.857 20:00:01 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:03.857 20:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:03.857 20:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.857 9 00:05:03.857 
20:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:03.857 20:00:01 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:03.857 20:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:03.857 20:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.857 10 00:05:03.857 20:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:03.857 20:00:01 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:03.857 20:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:03.857 20:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.857 20:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:03.857 20:00:01 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:03.857 20:00:01 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:03.857 20:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:03.857 20:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.857 20:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:03.857 20:00:01 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:03.857 20:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:03.857 20:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:04.465 20:00:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:04.465 20:00:02 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:04.465 20:00:02 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:04.465 20:00:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:04.465 20:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:05.847 20:00:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:05.847 00:05:05.847 real 0m1.755s 00:05:05.847 user 0m0.011s 00:05:05.847 sys 0m0.007s 00:05:05.847 20:00:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.847 20:00:03 -- common/autotest_common.sh@10 -- # set +x 00:05:05.847 ************************************ 00:05:05.847 END TEST scheduler_create_thread 00:05:05.847 ************************************ 00:05:05.847 20:00:03 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:05.847 20:00:03 -- scheduler/scheduler.sh@46 -- # killprocess 1321351 00:05:05.847 20:00:03 -- common/autotest_common.sh@926 -- # '[' -z 1321351 ']' 00:05:05.847 20:00:03 -- common/autotest_common.sh@930 -- # kill -0 1321351 00:05:05.847 20:00:03 -- common/autotest_common.sh@931 -- # uname 00:05:05.847 20:00:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:05.847 20:00:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1321351 00:05:05.847 20:00:03 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:05.847 20:00:03 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:05.847 20:00:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1321351' 00:05:05.847 killing process with pid 1321351 00:05:05.847 20:00:03 -- common/autotest_common.sh@945 -- # kill 1321351 00:05:05.847 20:00:03 -- common/autotest_common.sh@950 -- # wait 1321351 00:05:06.108 [2024-04-25 20:00:03.830376] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
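The scheduler_create_thread pass above drives the test app through its plugin RPCs; a condensed sketch of that sequence is below, assuming scheduler_plugin is on rpc.py's plugin path as the harness arranges. The thread IDs (11 and 12 in this run) are printed by the create calls and captured, not fixed values:

    # create pinned active and idle threads (the log creates one of each per core)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # create an unpinned thread and change how active it is at runtime
    thread_id=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    # create and immediately delete a thread, as the "deleted" step above does
    doomed_id=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$doomed_id"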
00:05:06.368 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:06.368 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:06.368 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:06.368 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:06.368 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:06.368 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:06.368 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:06.368 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:06.368 00:05:06.368 real 0m3.815s 00:05:06.368 user 0m6.083s 00:05:06.368 sys 0m0.435s 00:05:06.368 20:00:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.368 20:00:04 -- common/autotest_common.sh@10 -- # set +x 00:05:06.368 ************************************ 00:05:06.368 END TEST event_scheduler 00:05:06.368 ************************************ 00:05:06.628 20:00:04 -- event/event.sh@51 -- # modprobe -n nbd 00:05:06.628 20:00:04 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:06.628 20:00:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.628 20:00:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.628 20:00:04 -- common/autotest_common.sh@10 -- # set +x 00:05:06.628 ************************************ 00:05:06.628 START TEST app_repeat 00:05:06.628 ************************************ 00:05:06.628 20:00:04 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:06.628 20:00:04 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.628 20:00:04 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.628 20:00:04 -- event/event.sh@13 -- # local nbd_list 00:05:06.628 20:00:04 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.628 20:00:04 -- event/event.sh@14 -- # local bdev_list 00:05:06.628 20:00:04 -- event/event.sh@15 -- # local repeat_times=4 00:05:06.628 20:00:04 -- event/event.sh@17 -- # modprobe nbd 00:05:06.628 20:00:04 -- event/event.sh@19 -- # repeat_pid=1322196 00:05:06.628 20:00:04 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.628 20:00:04 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1322196' 00:05:06.628 Process app_repeat pid: 1322196 00:05:06.628 20:00:04 -- event/event.sh@23 -- # for i in {0..2} 00:05:06.628 20:00:04 -- event/event.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:06.628 20:00:04 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:06.628 spdk_app_start Round 0 00:05:06.628 20:00:04 -- event/event.sh@25 -- # waitforlisten 1322196 /var/tmp/spdk-nbd.sock 00:05:06.628 20:00:04 -- common/autotest_common.sh@819 -- # '[' -z 1322196 ']' 00:05:06.628 20:00:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.628 20:00:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:06.628 20:00:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:06.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:06.628 20:00:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:06.628 20:00:04 -- common/autotest_common.sh@10 -- # set +x 00:05:06.628 [2024-04-25 20:00:04.399762] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:06.628 [2024-04-25 20:00:04.399902] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1322196 ] 00:05:06.628 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.628 [2024-04-25 20:00:04.537472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.889 [2024-04-25 20:00:04.635518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.889 [2024-04-25 20:00:04.635521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.460 20:00:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:07.460 20:00:05 -- common/autotest_common.sh@852 -- # return 0 00:05:07.460 20:00:05 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.460 Malloc0 00:05:07.460 20:00:05 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.719 Malloc1 00:05:07.719 20:00:05 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.719 20:00:05 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.719 20:00:05 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.719 20:00:05 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:07.719 20:00:05 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.719 20:00:05 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:07.719 20:00:05 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.719 20:00:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.719 20:00:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.719 20:00:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:07.719 20:00:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.719 20:00:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:07.719 20:00:05 -- bdev/nbd_common.sh@12 -- # local i 00:05:07.719 20:00:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:07.719 20:00:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.719 20:00:05 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:07.978 /dev/nbd0 00:05:07.978 20:00:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:07.978 20:00:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:07.978 20:00:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:07.978 20:00:05 -- common/autotest_common.sh@857 -- # local i 00:05:07.978 20:00:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:07.978 20:00:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:07.978 20:00:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:07.979 20:00:05 -- common/autotest_common.sh@861 -- # 
break 00:05:07.979 20:00:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:07.979 20:00:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:07.979 20:00:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.979 1+0 records in 00:05:07.979 1+0 records out 00:05:07.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261312 s, 15.7 MB/s 00:05:07.979 20:00:05 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:07.979 20:00:05 -- common/autotest_common.sh@874 -- # size=4096 00:05:07.979 20:00:05 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:07.979 20:00:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:07.979 20:00:05 -- common/autotest_common.sh@877 -- # return 0 00:05:07.979 20:00:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.979 20:00:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.979 20:00:05 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:07.979 /dev/nbd1 00:05:07.979 20:00:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:07.979 20:00:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:07.979 20:00:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:07.979 20:00:05 -- common/autotest_common.sh@857 -- # local i 00:05:07.979 20:00:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:07.979 20:00:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:07.979 20:00:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:07.979 20:00:05 -- common/autotest_common.sh@861 -- # break 00:05:07.979 20:00:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:07.979 20:00:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:07.979 20:00:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.979 1+0 records in 00:05:07.979 1+0 records out 00:05:07.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266183 s, 15.4 MB/s 00:05:07.979 20:00:05 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:07.979 20:00:05 -- common/autotest_common.sh@874 -- # size=4096 00:05:07.979 20:00:05 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:07.979 20:00:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:07.979 20:00:05 -- common/autotest_common.sh@877 -- # return 0 00:05:07.979 20:00:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.979 20:00:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.979 20:00:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.979 20:00:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.979 20:00:05 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.247 20:00:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:08.247 { 00:05:08.247 "nbd_device": "/dev/nbd0", 00:05:08.247 "bdev_name": "Malloc0" 00:05:08.247 }, 00:05:08.247 { 00:05:08.247 "nbd_device": "/dev/nbd1", 00:05:08.247 "bdev_name": "Malloc1" 00:05:08.247 } 00:05:08.247 ]' 
00:05:08.247 20:00:06 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:08.247 { 00:05:08.247 "nbd_device": "/dev/nbd0", 00:05:08.247 "bdev_name": "Malloc0" 00:05:08.247 }, 00:05:08.247 { 00:05:08.247 "nbd_device": "/dev/nbd1", 00:05:08.247 "bdev_name": "Malloc1" 00:05:08.247 } 00:05:08.247 ]' 00:05:08.247 20:00:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.247 20:00:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:08.247 /dev/nbd1' 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:08.248 /dev/nbd1' 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@65 -- # count=2 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@95 -- # count=2 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:08.248 256+0 records in 00:05:08.248 256+0 records out 00:05:08.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501424 s, 209 MB/s 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:08.248 256+0 records in 00:05:08.248 256+0 records out 00:05:08.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151033 s, 69.4 MB/s 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:08.248 256+0 records in 00:05:08.248 256+0 records out 00:05:08.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161715 s, 64.8 MB/s 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:08.248 
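The Malloc-over-NBD data check logged above follows a simple write-then-compare pattern. A trimmed sketch of that flow against a single device, using only RPCs and commands visible in the log; -s /var/tmp/spdk-nbd.sock is the socket of the app_repeat instance started for this round:

    # export a 64 MB, 4096-byte-block malloc bdev as /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    # write 1 MiB of random data through the NBD device, then read it back and compare
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0
    # tear down
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    rm nbdrandtest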
20:00:06 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@51 -- # local i 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.248 20:00:06 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:08.513 20:00:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:08.513 20:00:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:08.513 20:00:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:08.513 20:00:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.513 20:00:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.513 20:00:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:08.513 20:00:06 -- bdev/nbd_common.sh@41 -- # break 00:05:08.513 20:00:06 -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.513 20:00:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.513 20:00:06 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@41 -- # break 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@65 -- # true 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@65 -- # count=0 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@104 -- # count=0 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:08.773 20:00:06 -- bdev/nbd_common.sh@109 -- # return 0 00:05:08.773 20:00:06 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:09.033 20:00:06 -- event/event.sh@35 -- # sleep 3 00:05:09.601 [2024-04-25 20:00:07.356882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.601 
[2024-04-25 20:00:07.446700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.601 [2024-04-25 20:00:07.446704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.601 [2024-04-25 20:00:07.525977] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:09.601 [2024-04-25 20:00:07.526028] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:12.175 20:00:09 -- event/event.sh@23 -- # for i in {0..2} 00:05:12.175 20:00:09 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:12.175 spdk_app_start Round 1 00:05:12.175 20:00:09 -- event/event.sh@25 -- # waitforlisten 1322196 /var/tmp/spdk-nbd.sock 00:05:12.175 20:00:09 -- common/autotest_common.sh@819 -- # '[' -z 1322196 ']' 00:05:12.175 20:00:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.175 20:00:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:12.175 20:00:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:12.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:12.175 20:00:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:12.175 20:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:12.175 20:00:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:12.175 20:00:10 -- common/autotest_common.sh@852 -- # return 0 00:05:12.175 20:00:10 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.434 Malloc0 00:05:12.434 20:00:10 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.434 Malloc1 00:05:12.434 20:00:10 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.434 20:00:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.434 20:00:10 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.434 20:00:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.434 20:00:10 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.434 20:00:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.434 20:00:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.434 20:00:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.434 20:00:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.434 20:00:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.434 20:00:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.434 20:00:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.434 20:00:10 -- bdev/nbd_common.sh@12 -- # local i 00:05:12.434 20:00:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.434 20:00:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.434 20:00:10 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.693 /dev/nbd0 00:05:12.693 20:00:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.693 20:00:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.693 20:00:10 -- 
common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:12.693 20:00:10 -- common/autotest_common.sh@857 -- # local i 00:05:12.693 20:00:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:12.693 20:00:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:12.693 20:00:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:12.693 20:00:10 -- common/autotest_common.sh@861 -- # break 00:05:12.693 20:00:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:12.693 20:00:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:12.693 20:00:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.693 1+0 records in 00:05:12.693 1+0 records out 00:05:12.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000134791 s, 30.4 MB/s 00:05:12.693 20:00:10 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:12.693 20:00:10 -- common/autotest_common.sh@874 -- # size=4096 00:05:12.693 20:00:10 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:12.693 20:00:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:12.693 20:00:10 -- common/autotest_common.sh@877 -- # return 0 00:05:12.693 20:00:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.693 20:00:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.693 20:00:10 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.953 /dev/nbd1 00:05:12.953 20:00:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.953 20:00:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.953 20:00:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:12.953 20:00:10 -- common/autotest_common.sh@857 -- # local i 00:05:12.953 20:00:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:12.953 20:00:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:12.953 20:00:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:12.953 20:00:10 -- common/autotest_common.sh@861 -- # break 00:05:12.953 20:00:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:12.954 20:00:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:12.954 20:00:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.954 1+0 records in 00:05:12.954 1+0 records out 00:05:12.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000143734 s, 28.5 MB/s 00:05:12.954 20:00:10 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:12.954 20:00:10 -- common/autotest_common.sh@874 -- # size=4096 00:05:12.954 20:00:10 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:12.954 20:00:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:12.954 20:00:10 -- common/autotest_common.sh@877 -- # return 0 00:05:12.954 20:00:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.954 20:00:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.954 20:00:10 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.954 20:00:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.954 20:00:10 -- 
bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.954 20:00:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.954 { 00:05:12.954 "nbd_device": "/dev/nbd0", 00:05:12.954 "bdev_name": "Malloc0" 00:05:12.954 }, 00:05:12.954 { 00:05:12.954 "nbd_device": "/dev/nbd1", 00:05:12.954 "bdev_name": "Malloc1" 00:05:12.954 } 00:05:12.954 ]' 00:05:12.954 20:00:10 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.954 { 00:05:12.954 "nbd_device": "/dev/nbd0", 00:05:12.954 "bdev_name": "Malloc0" 00:05:12.954 }, 00:05:12.954 { 00:05:12.954 "nbd_device": "/dev/nbd1", 00:05:12.954 "bdev_name": "Malloc1" 00:05:12.954 } 00:05:12.954 ]' 00:05:12.954 20:00:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.954 20:00:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.954 /dev/nbd1' 00:05:12.954 20:00:10 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.954 /dev/nbd1' 00:05:12.954 20:00:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.214 20:00:10 -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.214 20:00:10 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.214 20:00:10 -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.214 20:00:10 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.214 20:00:10 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.214 20:00:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.214 20:00:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.214 20:00:10 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.214 20:00:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.214 20:00:10 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.214 20:00:10 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.214 256+0 records in 00:05:13.214 256+0 records out 00:05:13.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450521 s, 233 MB/s 00:05:13.214 20:00:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:13.215 256+0 records in 00:05:13.215 256+0 records out 00:05:13.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207824 s, 50.5 MB/s 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:13.215 256+0 records in 00:05:13.215 256+0 records out 00:05:13.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017114 s, 61.3 MB/s 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:13.215 20:00:10 -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@51 -- # local i 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.215 20:00:10 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.215 20:00:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.215 20:00:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.215 20:00:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.215 20:00:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.215 20:00:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.215 20:00:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.215 20:00:11 -- bdev/nbd_common.sh@41 -- # break 00:05:13.215 20:00:11 -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.215 20:00:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.215 20:00:11 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.475 20:00:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.475 20:00:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.475 20:00:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.475 20:00:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.475 20:00:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.475 20:00:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.475 20:00:11 -- bdev/nbd_common.sh@41 -- # break 00:05:13.475 20:00:11 -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.475 20:00:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.475 20:00:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.475 20:00:11 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.475 20:00:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.475 20:00:11 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.475 20:00:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.736 20:00:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.736 20:00:11 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.736 20:00:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.736 20:00:11 -- bdev/nbd_common.sh@65 -- # true 00:05:13.736 20:00:11 -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.736 20:00:11 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.736 20:00:11 -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.736 20:00:11 -- 
bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.736 20:00:11 -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.736 20:00:11 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.736 20:00:11 -- event/event.sh@35 -- # sleep 3 00:05:14.306 [2024-04-25 20:00:12.154984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.566 [2024-04-25 20:00:12.248878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.566 [2024-04-25 20:00:12.248883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.566 [2024-04-25 20:00:12.333212] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:14.566 [2024-04-25 20:00:12.333254] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.109 20:00:14 -- event/event.sh@23 -- # for i in {0..2} 00:05:17.109 20:00:14 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:17.109 spdk_app_start Round 2 00:05:17.109 20:00:14 -- event/event.sh@25 -- # waitforlisten 1322196 /var/tmp/spdk-nbd.sock 00:05:17.109 20:00:14 -- common/autotest_common.sh@819 -- # '[' -z 1322196 ']' 00:05:17.109 20:00:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.109 20:00:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:17.109 20:00:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:17.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.109 20:00:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:17.109 20:00:14 -- common/autotest_common.sh@10 -- # set +x 00:05:17.109 20:00:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:17.109 20:00:14 -- common/autotest_common.sh@852 -- # return 0 00:05:17.109 20:00:14 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.109 Malloc0 00:05:17.109 20:00:14 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.368 Malloc1 00:05:17.368 20:00:15 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.368 20:00:15 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.368 20:00:15 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.368 20:00:15 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.368 20:00:15 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.368 20:00:15 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.368 20:00:15 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.368 20:00:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.368 20:00:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.368 20:00:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.368 20:00:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.368 20:00:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.368 20:00:15 -- bdev/nbd_common.sh@12 -- # local i 00:05:17.368 20:00:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
00:05:17.368 20:00:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.368 20:00:15 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.368 /dev/nbd0 00:05:17.368 20:00:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.368 20:00:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.368 20:00:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:17.368 20:00:15 -- common/autotest_common.sh@857 -- # local i 00:05:17.368 20:00:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:17.368 20:00:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:17.368 20:00:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:17.368 20:00:15 -- common/autotest_common.sh@861 -- # break 00:05:17.368 20:00:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:17.368 20:00:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:17.368 20:00:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.629 1+0 records in 00:05:17.629 1+0 records out 00:05:17.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268804 s, 15.2 MB/s 00:05:17.629 20:00:15 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:17.629 20:00:15 -- common/autotest_common.sh@874 -- # size=4096 00:05:17.629 20:00:15 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:17.629 20:00:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:17.629 20:00:15 -- common/autotest_common.sh@877 -- # return 0 00:05:17.629 20:00:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.629 20:00:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.629 20:00:15 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.629 /dev/nbd1 00:05:17.630 20:00:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.630 20:00:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.630 20:00:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:17.630 20:00:15 -- common/autotest_common.sh@857 -- # local i 00:05:17.630 20:00:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:17.630 20:00:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:17.630 20:00:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:17.630 20:00:15 -- common/autotest_common.sh@861 -- # break 00:05:17.630 20:00:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:17.630 20:00:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:17.630 20:00:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.630 1+0 records in 00:05:17.630 1+0 records out 00:05:17.630 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228154 s, 18.0 MB/s 00:05:17.630 20:00:15 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:17.630 20:00:15 -- common/autotest_common.sh@874 -- # size=4096 00:05:17.630 20:00:15 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:17.630 20:00:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 
0 ']' 00:05:17.630 20:00:15 -- common/autotest_common.sh@877 -- # return 0 00:05:17.630 20:00:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.630 20:00:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.630 20:00:15 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.630 20:00:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.630 20:00:15 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.890 20:00:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.890 { 00:05:17.890 "nbd_device": "/dev/nbd0", 00:05:17.890 "bdev_name": "Malloc0" 00:05:17.890 }, 00:05:17.890 { 00:05:17.890 "nbd_device": "/dev/nbd1", 00:05:17.890 "bdev_name": "Malloc1" 00:05:17.890 } 00:05:17.890 ]' 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.891 { 00:05:17.891 "nbd_device": "/dev/nbd0", 00:05:17.891 "bdev_name": "Malloc0" 00:05:17.891 }, 00:05:17.891 { 00:05:17.891 "nbd_device": "/dev/nbd1", 00:05:17.891 "bdev_name": "Malloc1" 00:05:17.891 } 00:05:17.891 ]' 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.891 /dev/nbd1' 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.891 /dev/nbd1' 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.891 256+0 records in 00:05:17.891 256+0 records out 00:05:17.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00483982 s, 217 MB/s 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.891 256+0 records in 00:05:17.891 256+0 records out 00:05:17.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150885 s, 69.5 MB/s 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.891 256+0 records in 00:05:17.891 256+0 records out 00:05:17.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161335 s, 65.0 MB/s 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@70 -- # 
local nbd_list 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@51 -- # local i 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.891 20:00:15 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.151 20:00:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.151 20:00:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.151 20:00:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.151 20:00:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.151 20:00:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.151 20:00:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.151 20:00:15 -- bdev/nbd_common.sh@41 -- # break 00:05:18.151 20:00:15 -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.151 20:00:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.151 20:00:15 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.151 20:00:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.151 20:00:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.151 20:00:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.151 20:00:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.151 20:00:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.151 20:00:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.151 20:00:16 -- bdev/nbd_common.sh@41 -- # break 00:05:18.151 20:00:16 -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.411 20:00:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.411 20:00:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.411 20:00:16 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.411 20:00:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.411 20:00:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.411 20:00:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.411 20:00:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 
00:05:18.411 20:00:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.411 20:00:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.411 20:00:16 -- bdev/nbd_common.sh@65 -- # true 00:05:18.411 20:00:16 -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.411 20:00:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.411 20:00:16 -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.411 20:00:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.411 20:00:16 -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.411 20:00:16 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.672 20:00:16 -- event/event.sh@35 -- # sleep 3 00:05:19.262 [2024-04-25 20:00:16.987997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.262 [2024-04-25 20:00:17.080685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.262 [2024-04-25 20:00:17.080692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.262 [2024-04-25 20:00:17.163869] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.262 [2024-04-25 20:00:17.163931] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.807 20:00:19 -- event/event.sh@38 -- # waitforlisten 1322196 /var/tmp/spdk-nbd.sock 00:05:21.807 20:00:19 -- common/autotest_common.sh@819 -- # '[' -z 1322196 ']' 00:05:21.807 20:00:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.807 20:00:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:21.807 20:00:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:21.807 20:00:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:21.807 20:00:19 -- common/autotest_common.sh@10 -- # set +x 00:05:21.807 20:00:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:21.807 20:00:19 -- common/autotest_common.sh@852 -- # return 0 00:05:21.807 20:00:19 -- event/event.sh@39 -- # killprocess 1322196 00:05:21.807 20:00:19 -- common/autotest_common.sh@926 -- # '[' -z 1322196 ']' 00:05:21.807 20:00:19 -- common/autotest_common.sh@930 -- # kill -0 1322196 00:05:21.807 20:00:19 -- common/autotest_common.sh@931 -- # uname 00:05:21.807 20:00:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:21.807 20:00:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1322196 00:05:21.807 20:00:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:21.807 20:00:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:21.807 20:00:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1322196' 00:05:21.807 killing process with pid 1322196 00:05:21.807 20:00:19 -- common/autotest_common.sh@945 -- # kill 1322196 00:05:21.807 20:00:19 -- common/autotest_common.sh@950 -- # wait 1322196 00:05:22.376 spdk_app_start is called in Round 0. 00:05:22.376 Shutdown signal received, stop current app iteration 00:05:22.376 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:05:22.376 spdk_app_start is called in Round 1. 
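Just before the spdk_kill_instance SIGTERM above, nbd_get_count confirms nothing is left exported: it asks the target for its NBD list over RPC and counts how many /dev/nbd entries come back. The same check, compressed into a few lines (socket and RPC method as in the trace; the || true mirrors the harness's guard against grep -c exiting nonzero when the count is zero):

  rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  # nbd_get_disks returns a JSON array; an idle target answers []
  count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ] || { echo "still $count NBD device(s) exported"; exit 1; }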
00:05:22.376 Shutdown signal received, stop current app iteration 00:05:22.376 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:05:22.376 spdk_app_start is called in Round 2. 00:05:22.376 Shutdown signal received, stop current app iteration 00:05:22.376 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:05:22.376 spdk_app_start is called in Round 3. 00:05:22.376 Shutdown signal received, stop current app iteration 00:05:22.376 20:00:20 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:22.376 20:00:20 -- event/event.sh@42 -- # return 0 00:05:22.376 00:05:22.376 real 0m15.785s 00:05:22.376 user 0m32.968s 00:05:22.376 sys 0m2.100s 00:05:22.376 20:00:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.376 20:00:20 -- common/autotest_common.sh@10 -- # set +x 00:05:22.376 ************************************ 00:05:22.376 END TEST app_repeat 00:05:22.376 ************************************ 00:05:22.376 20:00:20 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:22.376 20:00:20 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:22.376 20:00:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.376 20:00:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.376 20:00:20 -- common/autotest_common.sh@10 -- # set +x 00:05:22.376 ************************************ 00:05:22.376 START TEST cpu_locks 00:05:22.376 ************************************ 00:05:22.376 20:00:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:22.376 * Looking for test storage... 00:05:22.376 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:05:22.376 20:00:20 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:22.376 20:00:20 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:22.376 20:00:20 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:22.376 20:00:20 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:22.376 20:00:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.376 20:00:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.376 20:00:20 -- common/autotest_common.sh@10 -- # set +x 00:05:22.376 ************************************ 00:05:22.376 START TEST default_locks 00:05:22.376 ************************************ 00:05:22.376 20:00:20 -- common/autotest_common.sh@1104 -- # default_locks 00:05:22.376 20:00:20 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1326032 00:05:22.376 20:00:20 -- event/cpu_locks.sh@47 -- # waitforlisten 1326032 00:05:22.376 20:00:20 -- common/autotest_common.sh@819 -- # '[' -z 1326032 ']' 00:05:22.376 20:00:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.376 20:00:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:22.376 20:00:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
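The default_locks test starting here launches spdk_tgt pinned to core 0 (-m 0x1) and, as the next stretch of the trace shows, proves the core is claimed by looking for a lock on a /var/tmp/spdk_cpu_lock_* file held by that pid. The check reduces to the following (pid and file-name pattern taken from this run; an illustration, not the harness code):

  pid=1326032                                   # pid reported by waitforlisten in this run
  # the target takes an advisory lock per claimed core; lslocks lists it
  lslocks -p "$pid" | grep -q spdk_cpu_lock \
      && echo "core lock held by $pid" \
      || { echo "no spdk_cpu_lock held by $pid"; exit 1; }

The stray "lslocks: write error" seen a little later is most likely harmless: grep -q exits after the first match and closes the pipe, so lslocks reports a failed write.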
00:05:22.376 20:00:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:22.376 20:00:20 -- common/autotest_common.sh@10 -- # set +x 00:05:22.376 20:00:20 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.376 [2024-04-25 20:00:20.307231] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:22.376 [2024-04-25 20:00:20.307326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326032 ] 00:05:22.636 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.636 [2024-04-25 20:00:20.400039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.636 [2024-04-25 20:00:20.496258] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.636 [2024-04-25 20:00:20.496463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.213 20:00:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:23.213 20:00:21 -- common/autotest_common.sh@852 -- # return 0 00:05:23.213 20:00:21 -- event/cpu_locks.sh@49 -- # locks_exist 1326032 00:05:23.213 20:00:21 -- event/cpu_locks.sh@22 -- # lslocks -p 1326032 00:05:23.213 20:00:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.474 lslocks: write error 00:05:23.474 20:00:21 -- event/cpu_locks.sh@50 -- # killprocess 1326032 00:05:23.474 20:00:21 -- common/autotest_common.sh@926 -- # '[' -z 1326032 ']' 00:05:23.474 20:00:21 -- common/autotest_common.sh@930 -- # kill -0 1326032 00:05:23.474 20:00:21 -- common/autotest_common.sh@931 -- # uname 00:05:23.474 20:00:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:23.474 20:00:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1326032 00:05:23.474 20:00:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:23.474 20:00:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:23.474 20:00:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1326032' 00:05:23.474 killing process with pid 1326032 00:05:23.474 20:00:21 -- common/autotest_common.sh@945 -- # kill 1326032 00:05:23.474 20:00:21 -- common/autotest_common.sh@950 -- # wait 1326032 00:05:24.473 20:00:22 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1326032 00:05:24.473 20:00:22 -- common/autotest_common.sh@640 -- # local es=0 00:05:24.473 20:00:22 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1326032 00:05:24.473 20:00:22 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:24.473 20:00:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:24.473 20:00:22 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:24.473 20:00:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:24.473 20:00:22 -- common/autotest_common.sh@643 -- # waitforlisten 1326032 00:05:24.473 20:00:22 -- common/autotest_common.sh@819 -- # '[' -z 1326032 ']' 00:05:24.473 20:00:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.473 20:00:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:24.473 20:00:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:24.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.473 20:00:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:24.473 20:00:22 -- common/autotest_common.sh@10 -- # set +x 00:05:24.473 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1326032) - No such process 00:05:24.473 ERROR: process (pid: 1326032) is no longer running 00:05:24.473 20:00:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:24.473 20:00:22 -- common/autotest_common.sh@852 -- # return 1 00:05:24.473 20:00:22 -- common/autotest_common.sh@643 -- # es=1 00:05:24.473 20:00:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:24.473 20:00:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:24.473 20:00:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:24.473 20:00:22 -- event/cpu_locks.sh@54 -- # no_locks 00:05:24.473 20:00:22 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:24.473 20:00:22 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:24.473 20:00:22 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:24.473 00:05:24.473 real 0m1.889s 00:05:24.473 user 0m1.844s 00:05:24.473 sys 0m0.503s 00:05:24.473 20:00:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.473 20:00:22 -- common/autotest_common.sh@10 -- # set +x 00:05:24.473 ************************************ 00:05:24.473 END TEST default_locks 00:05:24.473 ************************************ 00:05:24.473 20:00:22 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:24.473 20:00:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.473 20:00:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.473 20:00:22 -- common/autotest_common.sh@10 -- # set +x 00:05:24.473 ************************************ 00:05:24.473 START TEST default_locks_via_rpc 00:05:24.473 ************************************ 00:05:24.473 20:00:22 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:05:24.473 20:00:22 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1326374 00:05:24.473 20:00:22 -- event/cpu_locks.sh@63 -- # waitforlisten 1326374 00:05:24.473 20:00:22 -- common/autotest_common.sh@819 -- # '[' -z 1326374 ']' 00:05:24.473 20:00:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.473 20:00:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:24.473 20:00:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.473 20:00:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:24.473 20:00:22 -- common/autotest_common.sh@10 -- # set +x 00:05:24.473 20:00:22 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.473 [2024-04-25 20:00:22.261041] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:24.473 [2024-04-25 20:00:22.261180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326374 ] 00:05:24.473 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.473 [2024-04-25 20:00:22.390295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.734 [2024-04-25 20:00:22.484843] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:24.734 [2024-04-25 20:00:22.485068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.305 20:00:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:25.305 20:00:22 -- common/autotest_common.sh@852 -- # return 0 00:05:25.305 20:00:22 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:25.305 20:00:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.305 20:00:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.305 20:00:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.305 20:00:22 -- event/cpu_locks.sh@67 -- # no_locks 00:05:25.305 20:00:22 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:25.305 20:00:22 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:25.305 20:00:22 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:25.305 20:00:22 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:25.305 20:00:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.305 20:00:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.305 20:00:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.305 20:00:22 -- event/cpu_locks.sh@71 -- # locks_exist 1326374 00:05:25.305 20:00:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.305 20:00:22 -- event/cpu_locks.sh@22 -- # lslocks -p 1326374 00:05:25.305 20:00:23 -- event/cpu_locks.sh@73 -- # killprocess 1326374 00:05:25.305 20:00:23 -- common/autotest_common.sh@926 -- # '[' -z 1326374 ']' 00:05:25.305 20:00:23 -- common/autotest_common.sh@930 -- # kill -0 1326374 00:05:25.306 20:00:23 -- common/autotest_common.sh@931 -- # uname 00:05:25.306 20:00:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:25.306 20:00:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1326374 00:05:25.306 20:00:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:25.306 20:00:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:25.306 20:00:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1326374' 00:05:25.306 killing process with pid 1326374 00:05:25.306 20:00:23 -- common/autotest_common.sh@945 -- # kill 1326374 00:05:25.306 20:00:23 -- common/autotest_common.sh@950 -- # wait 1326374 00:05:26.248 00:05:26.248 real 0m1.940s 00:05:26.248 user 0m1.855s 00:05:26.248 sys 0m0.534s 00:05:26.248 20:00:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.248 20:00:24 -- common/autotest_common.sh@10 -- # set +x 00:05:26.248 ************************************ 00:05:26.248 END TEST default_locks_via_rpc 00:05:26.248 ************************************ 00:05:26.248 20:00:24 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:26.248 20:00:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.248 20:00:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.248 20:00:24 -- 
common/autotest_common.sh@10 -- # set +x 00:05:26.248 ************************************ 00:05:26.248 START TEST non_locking_app_on_locked_coremask 00:05:26.248 ************************************ 00:05:26.248 20:00:24 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:26.248 20:00:24 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1326707 00:05:26.248 20:00:24 -- event/cpu_locks.sh@81 -- # waitforlisten 1326707 /var/tmp/spdk.sock 00:05:26.248 20:00:24 -- common/autotest_common.sh@819 -- # '[' -z 1326707 ']' 00:05:26.248 20:00:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.248 20:00:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:26.248 20:00:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.248 20:00:24 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.248 20:00:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:26.248 20:00:24 -- common/autotest_common.sh@10 -- # set +x 00:05:26.508 [2024-04-25 20:00:24.247200] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:26.508 [2024-04-25 20:00:24.247348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326707 ] 00:05:26.508 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.508 [2024-04-25 20:00:24.380123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.767 [2024-04-25 20:00:24.480313] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:26.767 [2024-04-25 20:00:24.480540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.027 20:00:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:27.027 20:00:24 -- common/autotest_common.sh@852 -- # return 0 00:05:27.027 20:00:24 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1327001 00:05:27.027 20:00:24 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:27.027 20:00:24 -- event/cpu_locks.sh@85 -- # waitforlisten 1327001 /var/tmp/spdk2.sock 00:05:27.027 20:00:24 -- common/autotest_common.sh@819 -- # '[' -z 1327001 ']' 00:05:27.027 20:00:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.027 20:00:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:27.027 20:00:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.027 20:00:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:27.027 20:00:24 -- common/autotest_common.sh@10 -- # set +x 00:05:27.287 [2024-04-25 20:00:25.038529] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:27.287 [2024-04-25 20:00:25.038645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327001 ] 00:05:27.287 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.287 [2024-04-25 20:00:25.189467] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:27.287 [2024-04-25 20:00:25.189513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.548 [2024-04-25 20:00:25.380331] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.548 [2024-04-25 20:00:25.380537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.487 20:00:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:28.487 20:00:26 -- common/autotest_common.sh@852 -- # return 0 00:05:28.487 20:00:26 -- event/cpu_locks.sh@87 -- # locks_exist 1326707 00:05:28.487 20:00:26 -- event/cpu_locks.sh@22 -- # lslocks -p 1326707 00:05:28.487 20:00:26 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.058 lslocks: write error 00:05:29.058 20:00:26 -- event/cpu_locks.sh@89 -- # killprocess 1326707 00:05:29.058 20:00:26 -- common/autotest_common.sh@926 -- # '[' -z 1326707 ']' 00:05:29.058 20:00:26 -- common/autotest_common.sh@930 -- # kill -0 1326707 00:05:29.058 20:00:26 -- common/autotest_common.sh@931 -- # uname 00:05:29.058 20:00:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:29.059 20:00:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1326707 00:05:29.059 20:00:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:29.059 20:00:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:29.059 20:00:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1326707' 00:05:29.059 killing process with pid 1326707 00:05:29.059 20:00:26 -- common/autotest_common.sh@945 -- # kill 1326707 00:05:29.059 20:00:26 -- common/autotest_common.sh@950 -- # wait 1326707 00:05:30.970 20:00:28 -- event/cpu_locks.sh@90 -- # killprocess 1327001 00:05:30.970 20:00:28 -- common/autotest_common.sh@926 -- # '[' -z 1327001 ']' 00:05:30.970 20:00:28 -- common/autotest_common.sh@930 -- # kill -0 1327001 00:05:30.970 20:00:28 -- common/autotest_common.sh@931 -- # uname 00:05:30.970 20:00:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:30.970 20:00:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1327001 00:05:30.970 20:00:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:30.970 20:00:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:30.970 20:00:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1327001' 00:05:30.970 killing process with pid 1327001 00:05:30.970 20:00:28 -- common/autotest_common.sh@945 -- # kill 1327001 00:05:30.970 20:00:28 -- common/autotest_common.sh@950 -- # wait 1327001 00:05:31.540 00:05:31.540 real 0m5.127s 00:05:31.540 user 0m5.271s 00:05:31.540 sys 0m1.055s 00:05:31.540 20:00:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.540 20:00:29 -- common/autotest_common.sh@10 -- # set +x 00:05:31.540 ************************************ 00:05:31.540 END TEST non_locking_app_on_locked_coremask 00:05:31.540 ************************************ 00:05:31.540 20:00:29 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:05:31.540 20:00:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.540 20:00:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.540 20:00:29 -- common/autotest_common.sh@10 -- # set +x 00:05:31.540 ************************************ 00:05:31.540 START TEST locking_app_on_unlocked_coremask 00:05:31.540 ************************************ 00:05:31.540 20:00:29 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:31.540 20:00:29 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1327897 00:05:31.540 20:00:29 -- event/cpu_locks.sh@99 -- # waitforlisten 1327897 /var/tmp/spdk.sock 00:05:31.540 20:00:29 -- common/autotest_common.sh@819 -- # '[' -z 1327897 ']' 00:05:31.540 20:00:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.540 20:00:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.540 20:00:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.540 20:00:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.540 20:00:29 -- common/autotest_common.sh@10 -- # set +x 00:05:31.540 20:00:29 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:31.540 [2024-04-25 20:00:29.417901] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:31.540 [2024-04-25 20:00:29.418041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327897 ] 00:05:31.801 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.801 [2024-04-25 20:00:29.547399] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:31.801 [2024-04-25 20:00:29.547449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.801 [2024-04-25 20:00:29.639704] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.801 [2024-04-25 20:00:29.639911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.372 20:00:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.372 20:00:30 -- common/autotest_common.sh@852 -- # return 0 00:05:32.372 20:00:30 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1327951 00:05:32.372 20:00:30 -- event/cpu_locks.sh@103 -- # waitforlisten 1327951 /var/tmp/spdk2.sock 00:05:32.372 20:00:30 -- common/autotest_common.sh@819 -- # '[' -z 1327951 ']' 00:05:32.372 20:00:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.372 20:00:30 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:32.372 20:00:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:32.372 20:00:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
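locking_app_on_unlocked_coremask flips the previous test around: the first target is started with its core-mask locks disabled, so a second, lock-enabled target can claim the same core. Stripped of the harness plumbing, the two launches look roughly like this (binary path, mask, and sockets as in the trace; backgrounding with & is a simplification, the harness instead waits on each RPC socket):

  spdk=/var/jenkins/workspace/dsa-phy-autotest/spdk
  # instance 1: core 0, but with CPU-core lock files disabled
  "$spdk/build/bin/spdk_tgt" -m 0x1 --disable-cpumask-locks &
  # instance 2: same core 0, default locking, on a second RPC socket
  "$spdk/build/bin/spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &
  # only the second instance takes /var/tmp/spdk_cpu_lock_000, so both can run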
00:05:32.372 20:00:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:32.372 20:00:30 -- common/autotest_common.sh@10 -- # set +x 00:05:32.372 [2024-04-25 20:00:30.217589] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:32.372 [2024-04-25 20:00:30.217727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327951 ] 00:05:32.372 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.684 [2024-04-25 20:00:30.385286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.684 [2024-04-25 20:00:30.570971] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.684 [2024-04-25 20:00:30.571176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.064 20:00:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:34.064 20:00:31 -- common/autotest_common.sh@852 -- # return 0 00:05:34.064 20:00:31 -- event/cpu_locks.sh@105 -- # locks_exist 1327951 00:05:34.064 20:00:31 -- event/cpu_locks.sh@22 -- # lslocks -p 1327951 00:05:34.064 20:00:31 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.064 lslocks: write error 00:05:34.064 20:00:31 -- event/cpu_locks.sh@107 -- # killprocess 1327897 00:05:34.064 20:00:31 -- common/autotest_common.sh@926 -- # '[' -z 1327897 ']' 00:05:34.064 20:00:31 -- common/autotest_common.sh@930 -- # kill -0 1327897 00:05:34.064 20:00:31 -- common/autotest_common.sh@931 -- # uname 00:05:34.064 20:00:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:34.064 20:00:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1327897 00:05:34.064 20:00:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:34.064 20:00:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:34.064 20:00:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1327897' 00:05:34.064 killing process with pid 1327897 00:05:34.065 20:00:31 -- common/autotest_common.sh@945 -- # kill 1327897 00:05:34.065 20:00:31 -- common/autotest_common.sh@950 -- # wait 1327897 00:05:35.978 20:00:33 -- event/cpu_locks.sh@108 -- # killprocess 1327951 00:05:35.978 20:00:33 -- common/autotest_common.sh@926 -- # '[' -z 1327951 ']' 00:05:35.978 20:00:33 -- common/autotest_common.sh@930 -- # kill -0 1327951 00:05:35.978 20:00:33 -- common/autotest_common.sh@931 -- # uname 00:05:35.978 20:00:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:35.978 20:00:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1327951 00:05:35.978 20:00:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:35.978 20:00:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:35.978 20:00:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1327951' 00:05:35.978 killing process with pid 1327951 00:05:35.978 20:00:33 -- common/autotest_common.sh@945 -- # kill 1327951 00:05:35.978 20:00:33 -- common/autotest_common.sh@950 -- # wait 1327951 00:05:36.550 00:05:36.550 real 0m5.053s 00:05:36.550 user 0m5.215s 00:05:36.550 sys 0m0.939s 00:05:36.550 20:00:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.550 20:00:34 -- common/autotest_common.sh@10 -- # set +x 00:05:36.550 ************************************ 00:05:36.550 END TEST locking_app_on_unlocked_coremask 
00:05:36.550 ************************************ 00:05:36.550 20:00:34 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:36.550 20:00:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.550 20:00:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.550 20:00:34 -- common/autotest_common.sh@10 -- # set +x 00:05:36.550 ************************************ 00:05:36.550 START TEST locking_app_on_locked_coremask 00:05:36.550 ************************************ 00:05:36.550 20:00:34 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:36.550 20:00:34 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1328878 00:05:36.550 20:00:34 -- event/cpu_locks.sh@116 -- # waitforlisten 1328878 /var/tmp/spdk.sock 00:05:36.550 20:00:34 -- common/autotest_common.sh@819 -- # '[' -z 1328878 ']' 00:05:36.550 20:00:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.550 20:00:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:36.550 20:00:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.550 20:00:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:36.550 20:00:34 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.550 20:00:34 -- common/autotest_common.sh@10 -- # set +x 00:05:36.811 [2024-04-25 20:00:34.511793] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:36.811 [2024-04-25 20:00:34.511930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1328878 ] 00:05:36.811 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.811 [2024-04-25 20:00:34.640697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.811 [2024-04-25 20:00:34.739899] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.811 [2024-04-25 20:00:34.740117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.383 20:00:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:37.383 20:00:35 -- common/autotest_common.sh@852 -- # return 0 00:05:37.383 20:00:35 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1328981 00:05:37.383 20:00:35 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1328981 /var/tmp/spdk2.sock 00:05:37.383 20:00:35 -- common/autotest_common.sh@640 -- # local es=0 00:05:37.383 20:00:35 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1328981 /var/tmp/spdk2.sock 00:05:37.383 20:00:35 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:37.383 20:00:35 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:37.383 20:00:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:37.383 20:00:35 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:37.383 20:00:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:37.383 20:00:35 -- common/autotest_common.sh@643 -- # waitforlisten 1328981 /var/tmp/spdk2.sock 00:05:37.383 20:00:35 -- common/autotest_common.sh@819 -- # '[' -z 
1328981 ']' 00:05:37.383 20:00:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.383 20:00:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:37.383 20:00:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.383 20:00:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:37.383 20:00:35 -- common/autotest_common.sh@10 -- # set +x 00:05:37.643 [2024-04-25 20:00:35.316971] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:37.643 [2024-04-25 20:00:35.317114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1328981 ] 00:05:37.643 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.643 [2024-04-25 20:00:35.490578] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1328878 has claimed it. 00:05:37.643 [2024-04-25 20:00:35.490636] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:38.215 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1328981) - No such process 00:05:38.215 ERROR: process (pid: 1328981) is no longer running 00:05:38.215 20:00:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:38.215 20:00:35 -- common/autotest_common.sh@852 -- # return 1 00:05:38.215 20:00:35 -- common/autotest_common.sh@643 -- # es=1 00:05:38.215 20:00:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:38.215 20:00:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:38.215 20:00:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:38.215 20:00:35 -- event/cpu_locks.sh@122 -- # locks_exist 1328878 00:05:38.215 20:00:35 -- event/cpu_locks.sh@22 -- # lslocks -p 1328878 00:05:38.215 20:00:35 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.215 lslocks: write error 00:05:38.215 20:00:36 -- event/cpu_locks.sh@124 -- # killprocess 1328878 00:05:38.215 20:00:36 -- common/autotest_common.sh@926 -- # '[' -z 1328878 ']' 00:05:38.215 20:00:36 -- common/autotest_common.sh@930 -- # kill -0 1328878 00:05:38.215 20:00:36 -- common/autotest_common.sh@931 -- # uname 00:05:38.215 20:00:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:38.215 20:00:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1328878 00:05:38.215 20:00:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:38.215 20:00:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:38.215 20:00:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1328878' 00:05:38.215 killing process with pid 1328878 00:05:38.215 20:00:36 -- common/autotest_common.sh@945 -- # kill 1328878 00:05:38.215 20:00:36 -- common/autotest_common.sh@950 -- # wait 1328878 00:05:39.158 00:05:39.158 real 0m2.545s 00:05:39.158 user 0m2.586s 00:05:39.158 sys 0m0.761s 00:05:39.158 20:00:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.158 20:00:36 -- common/autotest_common.sh@10 -- # set +x 00:05:39.158 ************************************ 00:05:39.158 END TEST locking_app_on_locked_coremask 00:05:39.158 ************************************ 00:05:39.158 20:00:36 -- 
event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:39.158 20:00:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.158 20:00:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.158 20:00:36 -- common/autotest_common.sh@10 -- # set +x 00:05:39.158 ************************************ 00:05:39.158 START TEST locking_overlapped_coremask 00:05:39.158 ************************************ 00:05:39.158 20:00:36 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:39.158 20:00:36 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1329451 00:05:39.158 20:00:36 -- event/cpu_locks.sh@133 -- # waitforlisten 1329451 /var/tmp/spdk.sock 00:05:39.158 20:00:37 -- common/autotest_common.sh@819 -- # '[' -z 1329451 ']' 00:05:39.158 20:00:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.158 20:00:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:39.158 20:00:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.158 20:00:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:39.158 20:00:37 -- common/autotest_common.sh@10 -- # set +x 00:05:39.158 20:00:37 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:39.419 [2024-04-25 20:00:37.101713] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:39.419 [2024-04-25 20:00:37.101862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329451 ] 00:05:39.419 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.419 [2024-04-25 20:00:37.234049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.419 [2024-04-25 20:00:37.333673] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.419 [2024-04-25 20:00:37.333923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.419 [2024-04-25 20:00:37.334020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.419 [2024-04-25 20:00:37.334026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.990 20:00:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:39.990 20:00:37 -- common/autotest_common.sh@852 -- # return 0 00:05:39.990 20:00:37 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1329523 00:05:39.990 20:00:37 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1329523 /var/tmp/spdk2.sock 00:05:39.990 20:00:37 -- common/autotest_common.sh@640 -- # local es=0 00:05:39.991 20:00:37 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1329523 /var/tmp/spdk2.sock 00:05:39.991 20:00:37 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:39.991 20:00:37 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:39.991 20:00:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:39.991 20:00:37 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:39.991 20:00:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:39.991 20:00:37 -- 
common/autotest_common.sh@643 -- # waitforlisten 1329523 /var/tmp/spdk2.sock 00:05:39.991 20:00:37 -- common/autotest_common.sh@819 -- # '[' -z 1329523 ']' 00:05:39.991 20:00:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.991 20:00:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:39.991 20:00:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.991 20:00:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:39.991 20:00:37 -- common/autotest_common.sh@10 -- # set +x 00:05:39.991 [2024-04-25 20:00:37.899253] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:39.991 [2024-04-25 20:00:37.899396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329523 ] 00:05:40.251 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.251 [2024-04-25 20:00:38.069220] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1329451 has claimed it. 00:05:40.251 [2024-04-25 20:00:38.069269] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:40.823 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1329523) - No such process 00:05:40.823 ERROR: process (pid: 1329523) is no longer running 00:05:40.823 20:00:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:40.823 20:00:38 -- common/autotest_common.sh@852 -- # return 1 00:05:40.823 20:00:38 -- common/autotest_common.sh@643 -- # es=1 00:05:40.823 20:00:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:40.823 20:00:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:40.823 20:00:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:40.823 20:00:38 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:40.823 20:00:38 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:40.823 20:00:38 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:40.823 20:00:38 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:40.823 20:00:38 -- event/cpu_locks.sh@141 -- # killprocess 1329451 00:05:40.823 20:00:38 -- common/autotest_common.sh@926 -- # '[' -z 1329451 ']' 00:05:40.823 20:00:38 -- common/autotest_common.sh@930 -- # kill -0 1329451 00:05:40.823 20:00:38 -- common/autotest_common.sh@931 -- # uname 00:05:40.823 20:00:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:40.823 20:00:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1329451 00:05:40.823 20:00:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:40.823 20:00:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:40.823 20:00:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1329451' 00:05:40.823 killing process with pid 1329451 00:05:40.823 20:00:38 -- common/autotest_common.sh@945 -- # kill 1329451 00:05:40.823 20:00:38 -- 
common/autotest_common.sh@950 -- # wait 1329451 00:05:41.766 00:05:41.766 real 0m2.365s 00:05:41.766 user 0m6.075s 00:05:41.766 sys 0m0.609s 00:05:41.766 20:00:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.766 20:00:39 -- common/autotest_common.sh@10 -- # set +x 00:05:41.766 ************************************ 00:05:41.766 END TEST locking_overlapped_coremask 00:05:41.766 ************************************ 00:05:41.766 20:00:39 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:41.766 20:00:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.766 20:00:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.766 20:00:39 -- common/autotest_common.sh@10 -- # set +x 00:05:41.766 ************************************ 00:05:41.766 START TEST locking_overlapped_coremask_via_rpc 00:05:41.766 ************************************ 00:05:41.766 20:00:39 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:05:41.766 20:00:39 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1329859 00:05:41.766 20:00:39 -- event/cpu_locks.sh@149 -- # waitforlisten 1329859 /var/tmp/spdk.sock 00:05:41.766 20:00:39 -- common/autotest_common.sh@819 -- # '[' -z 1329859 ']' 00:05:41.766 20:00:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.766 20:00:39 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:41.766 20:00:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:41.766 20:00:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.766 20:00:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:41.766 20:00:39 -- common/autotest_common.sh@10 -- # set +x 00:05:41.766 [2024-04-25 20:00:39.508290] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:41.766 [2024-04-25 20:00:39.508435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329859 ] 00:05:41.766 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.766 [2024-04-25 20:00:39.639651] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:41.766 [2024-04-25 20:00:39.639704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.027 [2024-04-25 20:00:39.736450] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.027 [2024-04-25 20:00:39.736745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.027 [2024-04-25 20:00:39.736843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.027 [2024-04-25 20:00:39.736850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.288 20:00:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:42.288 20:00:40 -- common/autotest_common.sh@852 -- # return 0 00:05:42.288 20:00:40 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1330153 00:05:42.288 20:00:40 -- event/cpu_locks.sh@153 -- # waitforlisten 1330153 /var/tmp/spdk2.sock 00:05:42.288 20:00:40 -- common/autotest_common.sh@819 -- # '[' -z 1330153 ']' 00:05:42.288 20:00:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.288 20:00:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:42.288 20:00:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.288 20:00:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:42.288 20:00:40 -- common/autotest_common.sh@10 -- # set +x 00:05:42.288 20:00:40 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:42.549 [2024-04-25 20:00:40.309220] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:42.549 [2024-04-25 20:00:40.309367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1330153 ] 00:05:42.549 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.549 [2024-04-25 20:00:40.479947] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:42.549 [2024-04-25 20:00:40.479996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.810 [2024-04-25 20:00:40.663969] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.810 [2024-04-25 20:00:40.664218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.810 [2024-04-25 20:00:40.664375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.810 [2024-04-25 20:00:40.664409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:43.793 20:00:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:43.793 20:00:41 -- common/autotest_common.sh@852 -- # return 0 00:05:43.793 20:00:41 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:43.793 20:00:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.793 20:00:41 -- common/autotest_common.sh@10 -- # set +x 00:05:43.793 20:00:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.793 20:00:41 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:43.793 20:00:41 -- common/autotest_common.sh@640 -- # local es=0 00:05:43.793 20:00:41 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:43.793 20:00:41 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:43.793 20:00:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:43.793 20:00:41 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:43.794 20:00:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:43.794 20:00:41 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:43.794 20:00:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.794 20:00:41 -- common/autotest_common.sh@10 -- # set +x 00:05:43.794 [2024-04-25 20:00:41.684617] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1329859 has claimed it. 00:05:43.794 request: 00:05:43.794 { 00:05:43.794 "method": "framework_enable_cpumask_locks", 00:05:43.794 "req_id": 1 00:05:43.794 } 00:05:43.794 Got JSON-RPC error response 00:05:43.794 response: 00:05:43.794 { 00:05:43.794 "code": -32603, 00:05:43.794 "message": "Failed to claim CPU core: 2" 00:05:43.794 } 00:05:43.794 20:00:41 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:43.794 20:00:41 -- common/autotest_common.sh@643 -- # es=1 00:05:43.794 20:00:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:43.794 20:00:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:43.794 20:00:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:43.794 20:00:41 -- event/cpu_locks.sh@158 -- # waitforlisten 1329859 /var/tmp/spdk.sock 00:05:43.794 20:00:41 -- common/autotest_common.sh@819 -- # '[' -z 1329859 ']' 00:05:43.794 20:00:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.794 20:00:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.794 20:00:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
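The framework_enable_cpumask_locks call from the second target fails at this point because core 2 is shared between mask 0x7 (cores 0-2) and mask 0x1c (cores 2-4), and the first target claimed it when it enabled locking a moment earlier. Reproducing the refused call outside the test is a one-liner (socket, method, and the expected error all appear in the trace):

  # second instance tries to claim its cores after the fact and is refused
  /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
      framework_enable_cpumask_locks
  # expected JSON-RPC failure, as logged above:
  #   "code": -32603, "message": "Failed to claim CPU core: 2"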
00:05:43.794 20:00:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.794 20:00:41 -- common/autotest_common.sh@10 -- # set +x 00:05:44.053 20:00:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:44.053 20:00:41 -- common/autotest_common.sh@852 -- # return 0 00:05:44.053 20:00:41 -- event/cpu_locks.sh@159 -- # waitforlisten 1330153 /var/tmp/spdk2.sock 00:05:44.053 20:00:41 -- common/autotest_common.sh@819 -- # '[' -z 1330153 ']' 00:05:44.053 20:00:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.053 20:00:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:44.053 20:00:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.054 20:00:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:44.054 20:00:41 -- common/autotest_common.sh@10 -- # set +x 00:05:44.314 20:00:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:44.314 20:00:42 -- common/autotest_common.sh@852 -- # return 0 00:05:44.314 20:00:42 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:44.314 20:00:42 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:44.314 20:00:42 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:44.314 20:00:42 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:44.314 00:05:44.314 real 0m2.599s 00:05:44.314 user 0m0.814s 00:05:44.314 sys 0m0.201s 00:05:44.314 20:00:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.314 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.314 ************************************ 00:05:44.314 END TEST locking_overlapped_coremask_via_rpc 00:05:44.314 ************************************ 00:05:44.315 20:00:42 -- event/cpu_locks.sh@174 -- # cleanup 00:05:44.315 20:00:42 -- event/cpu_locks.sh@15 -- # [[ -z 1329859 ]] 00:05:44.315 20:00:42 -- event/cpu_locks.sh@15 -- # killprocess 1329859 00:05:44.315 20:00:42 -- common/autotest_common.sh@926 -- # '[' -z 1329859 ']' 00:05:44.315 20:00:42 -- common/autotest_common.sh@930 -- # kill -0 1329859 00:05:44.315 20:00:42 -- common/autotest_common.sh@931 -- # uname 00:05:44.315 20:00:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:44.315 20:00:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1329859 00:05:44.315 20:00:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:44.315 20:00:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:44.315 20:00:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1329859' 00:05:44.315 killing process with pid 1329859 00:05:44.315 20:00:42 -- common/autotest_common.sh@945 -- # kill 1329859 00:05:44.315 20:00:42 -- common/autotest_common.sh@950 -- # wait 1329859 00:05:45.255 20:00:42 -- event/cpu_locks.sh@16 -- # [[ -z 1330153 ]] 00:05:45.255 20:00:42 -- event/cpu_locks.sh@16 -- # killprocess 1330153 00:05:45.255 20:00:42 -- common/autotest_common.sh@926 -- # '[' -z 1330153 ']' 00:05:45.255 20:00:42 -- common/autotest_common.sh@930 -- # kill -0 1330153 00:05:45.255 20:00:42 -- common/autotest_common.sh@931 -- # uname 
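check_remaining_locks, run at the end of both overlapped-coremask tests, asserts that exactly the lock files for cores 0-2 exist once the surviving target owns mask 0x7. Its comparison amounts to the following (patterns copied from the trace):

  # expect exactly one lock file per core in mask 0x7
  locks=(/var/tmp/spdk_cpu_lock_*)
  expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${expected[*]}" ]] || { echo "unexpected lock files: ${locks[*]}"; exit 1; }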
00:05:45.255 20:00:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:45.255 20:00:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1330153 00:05:45.255 20:00:43 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:45.255 20:00:43 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:45.255 20:00:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1330153' 00:05:45.255 killing process with pid 1330153 00:05:45.255 20:00:43 -- common/autotest_common.sh@945 -- # kill 1330153 00:05:45.255 20:00:43 -- common/autotest_common.sh@950 -- # wait 1330153 00:05:46.197 20:00:43 -- event/cpu_locks.sh@18 -- # rm -f 00:05:46.197 20:00:43 -- event/cpu_locks.sh@1 -- # cleanup 00:05:46.197 20:00:43 -- event/cpu_locks.sh@15 -- # [[ -z 1329859 ]] 00:05:46.197 20:00:43 -- event/cpu_locks.sh@15 -- # killprocess 1329859 00:05:46.197 20:00:43 -- common/autotest_common.sh@926 -- # '[' -z 1329859 ']' 00:05:46.197 20:00:43 -- common/autotest_common.sh@930 -- # kill -0 1329859 00:05:46.197 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1329859) - No such process 00:05:46.197 20:00:43 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1329859 is not found' 00:05:46.197 Process with pid 1329859 is not found 00:05:46.197 20:00:43 -- event/cpu_locks.sh@16 -- # [[ -z 1330153 ]] 00:05:46.197 20:00:43 -- event/cpu_locks.sh@16 -- # killprocess 1330153 00:05:46.197 20:00:43 -- common/autotest_common.sh@926 -- # '[' -z 1330153 ']' 00:05:46.197 20:00:43 -- common/autotest_common.sh@930 -- # kill -0 1330153 00:05:46.197 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1330153) - No such process 00:05:46.197 20:00:43 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1330153 is not found' 00:05:46.197 Process with pid 1330153 is not found 00:05:46.197 20:00:43 -- event/cpu_locks.sh@18 -- # rm -f 00:05:46.197 00:05:46.197 real 0m23.701s 00:05:46.197 user 0m39.962s 00:05:46.197 sys 0m5.641s 00:05:46.197 20:00:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.197 20:00:43 -- common/autotest_common.sh@10 -- # set +x 00:05:46.197 ************************************ 00:05:46.197 END TEST cpu_locks 00:05:46.197 ************************************ 00:05:46.197 00:05:46.197 real 0m47.804s 00:05:46.197 user 1m25.874s 00:05:46.197 sys 0m8.826s 00:05:46.197 20:00:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.197 20:00:43 -- common/autotest_common.sh@10 -- # set +x 00:05:46.197 ************************************ 00:05:46.197 END TEST event 00:05:46.197 ************************************ 00:05:46.197 20:00:43 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:05:46.197 20:00:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.197 20:00:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.197 20:00:43 -- common/autotest_common.sh@10 -- # set +x 00:05:46.197 ************************************ 00:05:46.197 START TEST thread 00:05:46.197 ************************************ 00:05:46.197 20:00:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:05:46.197 * Looking for test storage... 
00:05:46.197 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread 00:05:46.197 20:00:43 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:46.197 20:00:43 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:46.197 20:00:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.197 20:00:43 -- common/autotest_common.sh@10 -- # set +x 00:05:46.197 ************************************ 00:05:46.197 START TEST thread_poller_perf 00:05:46.197 ************************************ 00:05:46.197 20:00:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:46.197 [2024-04-25 20:00:44.031780] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:46.197 [2024-04-25 20:00:44.031911] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1330866 ] 00:05:46.197 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.458 [2024-04-25 20:00:44.142604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.458 [2024-04-25 20:00:44.237013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.458 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:47.844 ====================================== 00:05:47.844 busy:1906366284 (cyc) 00:05:47.844 total_run_count: 382000 00:05:47.844 tsc_hz: 1900000000 (cyc) 00:05:47.844 ====================================== 00:05:47.844 poller_cost: 4990 (cyc), 2626 (nsec) 00:05:47.844 00:05:47.844 real 0m1.405s 00:05:47.844 user 0m1.272s 00:05:47.844 sys 0m0.125s 00:05:47.844 20:00:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.844 20:00:45 -- common/autotest_common.sh@10 -- # set +x 00:05:47.844 ************************************ 00:05:47.844 END TEST thread_poller_perf 00:05:47.844 ************************************ 00:05:47.844 20:00:45 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:47.844 20:00:45 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:47.844 20:00:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.844 20:00:45 -- common/autotest_common.sh@10 -- # set +x 00:05:47.844 ************************************ 00:05:47.844 START TEST thread_poller_perf 00:05:47.844 ************************************ 00:05:47.844 20:00:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:47.844 [2024-04-25 20:00:45.468511] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
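Note: the poller_perf summary above is plain integer arithmetic: busy cycles divided by the run count gives the per-call cost in cycles, and the reported TSC frequency converts that to nanoseconds. A short sketch that reproduces the printed 4990 cyc / 2626 nsec for this 1-microsecond-period run:

  # Reproduce the poller_cost line from the 1 us-period run above.
  busy=1906366284      # busy cycles reported for the run
  runs=382000          # total_run_count
  tsc_hz=1900000000    # reported TSC frequency (1.9 GHz)
  cyc=$((busy / runs))                      # -> 4990 cycles per poller call
  nsec=$((cyc * 1000000000 / tsc_hz))       # -> 2626 ns
  echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"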
00:05:47.844 [2024-04-25 20:00:45.468634] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1331180 ] 00:05:47.844 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.844 [2024-04-25 20:00:45.583326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.844 [2024-04-25 20:00:45.678564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.844 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:49.229 ====================================== 00:05:49.229 busy:1902323438 (cyc) 00:05:49.229 total_run_count: 5201000 00:05:49.229 tsc_hz: 1900000000 (cyc) 00:05:49.229 ====================================== 00:05:49.229 poller_cost: 365 (cyc), 192 (nsec) 00:05:49.229 00:05:49.229 real 0m1.410s 00:05:49.229 user 0m1.273s 00:05:49.229 sys 0m0.128s 00:05:49.229 20:00:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.229 20:00:46 -- common/autotest_common.sh@10 -- # set +x 00:05:49.229 ************************************ 00:05:49.229 END TEST thread_poller_perf 00:05:49.229 ************************************ 00:05:49.229 20:00:46 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:49.229 00:05:49.229 real 0m2.946s 00:05:49.229 user 0m2.590s 00:05:49.229 sys 0m0.355s 00:05:49.229 20:00:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.229 20:00:46 -- common/autotest_common.sh@10 -- # set +x 00:05:49.229 ************************************ 00:05:49.229 END TEST thread 00:05:49.229 ************************************ 00:05:49.229 20:00:46 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:05:49.229 20:00:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.229 20:00:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.229 20:00:46 -- common/autotest_common.sh@10 -- # set +x 00:05:49.229 ************************************ 00:05:49.229 START TEST accel 00:05:49.229 ************************************ 00:05:49.229 20:00:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:05:49.229 * Looking for test storage... 00:05:49.229 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel 00:05:49.229 20:00:46 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:49.230 20:00:46 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:49.230 20:00:46 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:49.230 20:00:46 -- accel/accel.sh@59 -- # spdk_tgt_pid=1331535 00:05:49.230 20:00:46 -- accel/accel.sh@60 -- # waitforlisten 1331535 00:05:49.230 20:00:46 -- common/autotest_common.sh@819 -- # '[' -z 1331535 ']' 00:05:49.230 20:00:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.230 20:00:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:49.230 20:00:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
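Note: the same arithmetic applied to the 0-microsecond-period run above gives 365 cycles (192 ns) per call, more than an order of magnitude cheaper than the 1 us timed pollers, presumably reflecting the extra timer bookkeeping a periodic poller pays on every expiry. Quick check of the printed figures:

  # Verify the poller_cost figures for the 0 us-period run above.
  echo "$((1902323438 / 5201000)) cyc"                                   # -> 365
  echo "$(( (1902323438 / 5201000) * 1000000000 / 1900000000 )) nsec"    # -> 192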
00:05:49.230 20:00:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:49.230 20:00:46 -- common/autotest_common.sh@10 -- # set +x 00:05:49.230 20:00:46 -- accel/accel.sh@58 -- # build_accel_config 00:05:49.230 20:00:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.230 20:00:46 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:49.230 20:00:46 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:49.230 20:00:46 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:05:49.230 20:00:46 -- accel/accel.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:49.230 20:00:46 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:49.230 20:00:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.230 20:00:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.230 20:00:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.230 20:00:46 -- accel/accel.sh@42 -- # jq -r . 00:05:49.230 [2024-04-25 20:00:47.086664] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:49.230 [2024-04-25 20:00:47.086804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1331535 ] 00:05:49.490 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.490 [2024-04-25 20:00:47.217475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.490 [2024-04-25 20:00:47.306497] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:49.490 [2024-04-25 20:00:47.306711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.490 [2024-04-25 20:00:47.311286] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:05:49.490 [2024-04-25 20:00:47.319235] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:05:57.630 20:00:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:57.630 20:00:54 -- common/autotest_common.sh@852 -- # return 0 00:05:57.630 20:00:54 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:57.630 20:00:54 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:57.630 20:00:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.630 20:00:54 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:57.630 20:00:54 -- common/autotest_common.sh@10 -- # set +x 00:05:57.630 20:00:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.630 20:00:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # IFS== 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.630 20:00:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:57.630 20:00:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # IFS== 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.630 20:00:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:57.630 20:00:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # IFS== 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.630 20:00:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:57.630 20:00:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # IFS== 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.630 20:00:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:57.630 20:00:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # IFS== 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.630 20:00:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:57.630 20:00:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # IFS== 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.630 20:00:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:57.630 20:00:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # IFS== 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.630 20:00:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=iaa 00:05:57.630 20:00:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # IFS== 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.630 20:00:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=iaa 00:05:57.630 20:00:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # IFS== 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.630 20:00:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:57.630 20:00:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # IFS== 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.630 20:00:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:57.630 20:00:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # IFS== 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.630 20:00:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:57.630 20:00:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # IFS== 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.630 20:00:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:57.630 20:00:54 -- accel/accel.sh@63 -- # for 
opc_opt in "${exp_opcs[@]}" 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # IFS== 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.630 20:00:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:57.630 20:00:54 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # IFS== 00:05:57.630 20:00:54 -- accel/accel.sh@64 -- # read -r opc module 00:05:57.630 20:00:54 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:57.630 20:00:54 -- accel/accel.sh@67 -- # killprocess 1331535 00:05:57.630 20:00:54 -- common/autotest_common.sh@926 -- # '[' -z 1331535 ']' 00:05:57.630 20:00:54 -- common/autotest_common.sh@930 -- # kill -0 1331535 00:05:57.630 20:00:54 -- common/autotest_common.sh@931 -- # uname 00:05:57.630 20:00:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:57.630 20:00:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1331535 00:05:57.630 20:00:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:57.630 20:00:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:57.630 20:00:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1331535' 00:05:57.630 killing process with pid 1331535 00:05:57.630 20:00:54 -- common/autotest_common.sh@945 -- # kill 1331535 00:05:57.630 20:00:54 -- common/autotest_common.sh@950 -- # wait 1331535 00:06:00.174 20:00:57 -- accel/accel.sh@68 -- # trap - ERR 00:06:00.174 20:00:57 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:00.174 20:00:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:00.174 20:00:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.174 20:00:57 -- common/autotest_common.sh@10 -- # set +x 00:06:00.174 20:00:57 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:00.174 20:00:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:00.174 20:00:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.174 20:00:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.174 20:00:57 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:00.174 20:00:57 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:00.174 20:00:57 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:00.174 20:00:57 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:00.174 20:00:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.174 20:00:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.174 20:00:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.174 20:00:57 -- accel/accel.sh@42 -- # jq -r . 
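Note: the expected_opcs table being filled in above comes from a single accel_get_opc_assignments RPC; the jq filter shown in the trace flattens the returned JSON object into opcode=module pairs (dsa, iaa or software) that the shell loop then reads back. A sketch of that query on its own; the rpc.py path is an assumption here, the RPC name and jq filter are taken verbatim from the trace:

  # Query opcode-to-module assignments the same way the trace above does.
  rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py   # assumed path
  "$rpc" accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' \
      | while IFS== read -r opc module; do
            echo "opcode $opc -> $module"    # e.g. dsa, iaa or software
        done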
00:06:00.174 20:00:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.174 20:00:57 -- common/autotest_common.sh@10 -- # set +x 00:06:00.174 20:00:57 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:00.174 20:00:57 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:00.174 20:00:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.174 20:00:57 -- common/autotest_common.sh@10 -- # set +x 00:06:00.174 ************************************ 00:06:00.174 START TEST accel_missing_filename 00:06:00.174 ************************************ 00:06:00.174 20:00:57 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:00.174 20:00:57 -- common/autotest_common.sh@640 -- # local es=0 00:06:00.174 20:00:57 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:00.174 20:00:57 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:00.174 20:00:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:00.174 20:00:57 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:00.174 20:00:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:00.174 20:00:57 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:00.174 20:00:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:00.174 20:00:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.174 20:00:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.174 20:00:57 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:00.174 20:00:57 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:00.174 20:00:57 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:00.174 20:00:57 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:00.174 20:00:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.174 20:00:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.174 20:00:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.174 20:00:57 -- accel/accel.sh@42 -- # jq -r . 00:06:00.174 [2024-04-25 20:00:57.963384] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:00.174 [2024-04-25 20:00:57.963517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333694 ] 00:06:00.174 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.174 [2024-04-25 20:00:58.081885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.433 [2024-04-25 20:00:58.180692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.434 [2024-04-25 20:00:58.185253] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:00.434 [2024-04-25 20:00:58.193218] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:07.030 [2024-04-25 20:01:04.600250] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:08.945 [2024-04-25 20:01:06.441300] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:08.945 A filename is required. 
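Note: accel_missing_filename above is a negative test: running a compress workload without -l is expected to abort with "A filename is required.", and the NOT wrapper turns that non-zero exit into a pass. A reduced sketch of the failing invocation (the DSA/IAA JSON config the real test feeds through -c /dev/fd/62 is left out here for brevity):

  # Compress with no input file must fail -- mirrors the negative case above.
  perf=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf
  if "$perf" -t 1 -w compress; then
      echo "accel_perf unexpectedly succeeded without -l" >&2
      exit 1
  else
      echo "got the expected failure"
  fi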
00:06:08.945 20:01:06 -- common/autotest_common.sh@643 -- # es=234 00:06:08.945 20:01:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:08.945 20:01:06 -- common/autotest_common.sh@652 -- # es=106 00:06:08.945 20:01:06 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:08.945 20:01:06 -- common/autotest_common.sh@660 -- # es=1 00:06:08.945 20:01:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:08.945 00:06:08.945 real 0m8.673s 00:06:08.945 user 0m2.276s 00:06:08.945 sys 0m0.244s 00:06:08.945 20:01:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.945 20:01:06 -- common/autotest_common.sh@10 -- # set +x 00:06:08.945 ************************************ 00:06:08.945 END TEST accel_missing_filename 00:06:08.945 ************************************ 00:06:08.945 20:01:06 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:08.945 20:01:06 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:08.945 20:01:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.945 20:01:06 -- common/autotest_common.sh@10 -- # set +x 00:06:08.945 ************************************ 00:06:08.945 START TEST accel_compress_verify 00:06:08.945 ************************************ 00:06:08.945 20:01:06 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:08.946 20:01:06 -- common/autotest_common.sh@640 -- # local es=0 00:06:08.946 20:01:06 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:08.946 20:01:06 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:08.946 20:01:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:08.946 20:01:06 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:08.946 20:01:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:08.946 20:01:06 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:08.946 20:01:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:08.946 20:01:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.946 20:01:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.946 20:01:06 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:08.946 20:01:06 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:08.946 20:01:06 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:08.946 20:01:06 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:08.946 20:01:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.946 20:01:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.946 20:01:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.946 20:01:06 -- accel/accel.sh@42 -- # jq -r . 00:06:08.946 [2024-04-25 20:01:06.666534] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:08.946 [2024-04-25 20:01:06.666657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335503 ] 00:06:08.946 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.946 [2024-04-25 20:01:06.782245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.946 [2024-04-25 20:01:06.876584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.207 [2024-04-25 20:01:06.881139] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:09.207 [2024-04-25 20:01:06.889100] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:15.818 [2024-04-25 20:01:13.292543] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.733 [2024-04-25 20:01:15.174778] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:17.733 00:06:17.733 Compression does not support the verify option, aborting. 00:06:17.733 20:01:15 -- common/autotest_common.sh@643 -- # es=161 00:06:17.733 20:01:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:17.733 20:01:15 -- common/autotest_common.sh@652 -- # es=33 00:06:17.733 20:01:15 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:17.733 20:01:15 -- common/autotest_common.sh@660 -- # es=1 00:06:17.733 20:01:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:17.733 00:06:17.733 real 0m8.705s 00:06:17.733 user 0m2.306s 00:06:17.733 sys 0m0.245s 00:06:17.733 20:01:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.733 20:01:15 -- common/autotest_common.sh@10 -- # set +x 00:06:17.733 ************************************ 00:06:17.733 END TEST accel_compress_verify 00:06:17.733 ************************************ 00:06:17.733 20:01:15 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:17.733 20:01:15 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:17.733 20:01:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.733 20:01:15 -- common/autotest_common.sh@10 -- # set +x 00:06:17.733 ************************************ 00:06:17.733 START TEST accel_wrong_workload 00:06:17.733 ************************************ 00:06:17.733 20:01:15 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:17.733 20:01:15 -- common/autotest_common.sh@640 -- # local es=0 00:06:17.733 20:01:15 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:17.733 20:01:15 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:17.733 20:01:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.733 20:01:15 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:17.733 20:01:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.733 20:01:15 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:17.733 20:01:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:17.733 20:01:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.733 20:01:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.733 20:01:15 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:17.733 20:01:15 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:17.733 20:01:15 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 
00:06:17.734 20:01:15 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:17.734 20:01:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.734 20:01:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.734 20:01:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.734 20:01:15 -- accel/accel.sh@42 -- # jq -r . 00:06:17.734 Unsupported workload type: foobar 00:06:17.734 [2024-04-25 20:01:15.399786] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:17.734 accel_perf options: 00:06:17.734 [-h help message] 00:06:17.734 [-q queue depth per core] 00:06:17.734 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:17.734 [-T number of threads per core 00:06:17.734 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:17.734 [-t time in seconds] 00:06:17.734 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:17.734 [ dif_verify, , dif_generate, dif_generate_copy 00:06:17.734 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:17.734 [-l for compress/decompress workloads, name of uncompressed input file 00:06:17.734 [-S for crc32c workload, use this seed value (default 0) 00:06:17.734 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:17.734 [-f for fill workload, use this BYTE value (default 255) 00:06:17.734 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:17.734 [-y verify result if this switch is on] 00:06:17.734 [-a tasks to allocate per core (default: same value as -q)] 00:06:17.734 Can be used to spread operations across a wider range of memory. 
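Note: the usage dump above doubles as the list of -w values accel_perf accepts (copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, dif_verify, dif_generate, dif_generate_copy); accel_wrong_workload simply passes a name outside that list and expects option parsing to fail, as reproduced in this reduced sketch (again without the -c accel config the real test supplies):

  # An unrecognized workload type is rejected at option-parsing time:
  /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -t 1 -w foobar
  # -> "Unsupported workload type: foobar" and a non-zero exit status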
00:06:17.734 20:01:15 -- common/autotest_common.sh@643 -- # es=1 00:06:17.734 20:01:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:17.734 20:01:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:17.734 20:01:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:17.734 00:06:17.734 real 0m0.054s 00:06:17.734 user 0m0.054s 00:06:17.734 sys 0m0.032s 00:06:17.734 20:01:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.734 20:01:15 -- common/autotest_common.sh@10 -- # set +x 00:06:17.734 ************************************ 00:06:17.734 END TEST accel_wrong_workload 00:06:17.734 ************************************ 00:06:17.734 20:01:15 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:17.734 20:01:15 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:17.734 20:01:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.734 20:01:15 -- common/autotest_common.sh@10 -- # set +x 00:06:17.734 ************************************ 00:06:17.734 START TEST accel_negative_buffers 00:06:17.734 ************************************ 00:06:17.734 20:01:15 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:17.734 20:01:15 -- common/autotest_common.sh@640 -- # local es=0 00:06:17.734 20:01:15 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:17.734 20:01:15 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:17.734 20:01:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.734 20:01:15 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:17.734 20:01:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.734 20:01:15 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:17.734 20:01:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:17.734 20:01:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.734 20:01:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.734 20:01:15 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:17.734 20:01:15 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:17.734 20:01:15 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:17.734 20:01:15 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:17.734 20:01:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.734 20:01:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.734 20:01:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.734 20:01:15 -- accel/accel.sh@42 -- # jq -r . 00:06:17.734 -x option must be non-negative. 00:06:17.734 [2024-04-25 20:01:15.484168] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:17.734 accel_perf options: 00:06:17.734 [-h help message] 00:06:17.734 [-q queue depth per core] 00:06:17.734 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:17.734 [-T number of threads per core 00:06:17.734 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:17.734 [-t time in seconds] 00:06:17.734 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:17.734 [ dif_verify, , dif_generate, dif_generate_copy 00:06:17.734 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:17.734 [-l for compress/decompress workloads, name of uncompressed input file 00:06:17.734 [-S for crc32c workload, use this seed value (default 0) 00:06:17.734 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:17.734 [-f for fill workload, use this BYTE value (default 255) 00:06:17.734 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:17.734 [-y verify result if this switch is on] 00:06:17.734 [-a tasks to allocate per core (default: same value as -q)] 00:06:17.734 Can be used to spread operations across a wider range of memory. 00:06:17.734 20:01:15 -- common/autotest_common.sh@643 -- # es=1 00:06:17.734 20:01:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:17.734 20:01:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:17.734 20:01:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:17.734 00:06:17.734 real 0m0.052s 00:06:17.734 user 0m0.040s 00:06:17.734 sys 0m0.029s 00:06:17.734 20:01:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.734 20:01:15 -- common/autotest_common.sh@10 -- # set +x 00:06:17.734 ************************************ 00:06:17.734 END TEST accel_negative_buffers 00:06:17.734 ************************************ 00:06:17.734 20:01:15 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:17.734 20:01:15 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:17.734 20:01:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.734 20:01:15 -- common/autotest_common.sh@10 -- # set +x 00:06:17.734 ************************************ 00:06:17.734 START TEST accel_crc32c 00:06:17.734 ************************************ 00:06:17.734 20:01:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:17.734 20:01:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.734 20:01:15 -- accel/accel.sh@17 -- # local accel_module 00:06:17.734 20:01:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:17.734 20:01:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:17.734 20:01:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.734 20:01:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.734 20:01:15 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:17.734 20:01:15 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:17.734 20:01:15 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:17.734 20:01:15 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:17.734 20:01:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.734 20:01:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.734 20:01:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.734 20:01:15 -- accel/accel.sh@42 -- # jq -r . 00:06:17.734 [2024-04-25 20:01:15.565935] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:17.734 [2024-04-25 20:01:15.566037] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337348 ] 00:06:17.734 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.996 [2024-04-25 20:01:15.681375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.996 [2024-04-25 20:01:15.776662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.996 [2024-04-25 20:01:15.781351] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:17.996 [2024-04-25 20:01:15.789305] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:27.997 20:01:25 -- accel/accel.sh@18 -- # out=' 00:06:27.997 SPDK Configuration: 00:06:27.997 Core mask: 0x1 00:06:27.997 00:06:27.997 Accel Perf Configuration: 00:06:27.997 Workload Type: crc32c 00:06:27.997 CRC-32C seed: 32 00:06:27.997 Transfer size: 4096 bytes 00:06:27.997 Vector count 1 00:06:27.997 Module: dsa 00:06:27.997 Queue depth: 32 00:06:27.997 Allocate depth: 32 00:06:27.997 # threads/core: 1 00:06:27.997 Run time: 1 seconds 00:06:27.997 Verify: Yes 00:06:27.997 00:06:27.997 Running for 1 seconds... 00:06:27.997 00:06:27.997 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:27.997 ------------------------------------------------------------------------------------ 00:06:27.997 0,0 355616/s 1389 MiB/s 0 0 00:06:27.997 ==================================================================================== 00:06:27.997 Total 355616/s 1389 MiB/s 0 0' 00:06:27.997 20:01:25 -- accel/accel.sh@20 -- # IFS=: 00:06:27.997 20:01:25 -- accel/accel.sh@20 -- # read -r var val 00:06:27.997 20:01:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:27.997 20:01:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:27.997 20:01:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.997 20:01:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.997 20:01:25 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:27.997 20:01:25 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:27.997 20:01:25 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:27.997 20:01:25 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:27.997 20:01:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.997 20:01:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.997 20:01:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.997 20:01:25 -- accel/accel.sh@42 -- # jq -r . 00:06:27.997 [2024-04-25 20:01:25.239609] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
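Note: the crc32c throughput in the table above follows directly from the transfer rate: 355616 4 KiB buffers per second on the dsa module works out to the reported 1389 MiB/s. Quick check of the figure:

  # 355616 transfers/s x 4096 bytes, expressed in MiB/s
  echo $((355616 * 4096 / 1048576))   # -> 1389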
00:06:27.997 [2024-04-25 20:01:25.239741] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339170 ] 00:06:27.997 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.997 [2024-04-25 20:01:25.356922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.997 [2024-04-25 20:01:25.449474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.997 [2024-04-25 20:01:25.454056] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:27.997 [2024-04-25 20:01:25.462024] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:34.586 20:01:31 -- accel/accel.sh@21 -- # val= 00:06:34.586 20:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.586 20:01:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.586 20:01:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.586 20:01:31 -- accel/accel.sh@21 -- # val= 00:06:34.586 20:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.586 20:01:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.586 20:01:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.586 20:01:31 -- accel/accel.sh@21 -- # val=0x1 00:06:34.586 20:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.586 20:01:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.586 20:01:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.586 20:01:31 -- accel/accel.sh@21 -- # val= 00:06:34.586 20:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.586 20:01:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.586 20:01:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.586 20:01:31 -- accel/accel.sh@21 -- # val= 00:06:34.586 20:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.586 20:01:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.586 20:01:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.586 20:01:31 -- accel/accel.sh@21 -- # val=crc32c 00:06:34.586 20:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.586 20:01:31 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.587 20:01:31 -- accel/accel.sh@21 -- # val=32 00:06:34.587 20:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.587 20:01:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.587 20:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.587 20:01:31 -- accel/accel.sh@21 -- # val= 00:06:34.587 20:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.587 20:01:31 -- accel/accel.sh@21 -- # val=dsa 00:06:34.587 20:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.587 20:01:31 -- accel/accel.sh@23 -- # accel_module=dsa 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.587 20:01:31 -- accel/accel.sh@21 -- # val=32 00:06:34.587 20:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.587 20:01:31 -- 
accel/accel.sh@21 -- # val=32 00:06:34.587 20:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.587 20:01:31 -- accel/accel.sh@21 -- # val=1 00:06:34.587 20:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.587 20:01:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.587 20:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.587 20:01:31 -- accel/accel.sh@21 -- # val=Yes 00:06:34.587 20:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.587 20:01:31 -- accel/accel.sh@21 -- # val= 00:06:34.587 20:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # read -r var val 00:06:34.587 20:01:31 -- accel/accel.sh@21 -- # val= 00:06:34.587 20:01:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # IFS=: 00:06:34.587 20:01:31 -- accel/accel.sh@20 -- # read -r var val 00:06:37.134 20:01:34 -- accel/accel.sh@21 -- # val= 00:06:37.134 20:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.134 20:01:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.134 20:01:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.134 20:01:34 -- accel/accel.sh@21 -- # val= 00:06:37.134 20:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.134 20:01:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.134 20:01:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.134 20:01:34 -- accel/accel.sh@21 -- # val= 00:06:37.134 20:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.134 20:01:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.134 20:01:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.134 20:01:34 -- accel/accel.sh@21 -- # val= 00:06:37.134 20:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.134 20:01:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.134 20:01:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.134 20:01:34 -- accel/accel.sh@21 -- # val= 00:06:37.134 20:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.134 20:01:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.134 20:01:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.134 20:01:34 -- accel/accel.sh@21 -- # val= 00:06:37.134 20:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.134 20:01:34 -- accel/accel.sh@20 -- # IFS=: 00:06:37.134 20:01:34 -- accel/accel.sh@20 -- # read -r var val 00:06:37.134 20:01:34 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:06:37.135 20:01:34 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:37.135 20:01:34 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:06:37.135 00:06:37.135 real 0m19.357s 00:06:37.135 user 0m6.521s 00:06:37.135 sys 0m0.482s 00:06:37.135 20:01:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.135 20:01:34 -- common/autotest_common.sh@10 -- # set +x 00:06:37.135 ************************************ 00:06:37.135 END TEST accel_crc32c 00:06:37.135 ************************************ 00:06:37.135 20:01:34 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:37.135 20:01:34 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 
00:06:37.135 20:01:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.135 20:01:34 -- common/autotest_common.sh@10 -- # set +x 00:06:37.135 ************************************ 00:06:37.135 START TEST accel_crc32c_C2 00:06:37.135 ************************************ 00:06:37.135 20:01:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:37.135 20:01:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.135 20:01:34 -- accel/accel.sh@17 -- # local accel_module 00:06:37.135 20:01:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:37.135 20:01:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:37.135 20:01:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.135 20:01:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.135 20:01:34 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:37.135 20:01:34 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:37.135 20:01:34 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:37.135 20:01:34 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:37.135 20:01:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.135 20:01:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.135 20:01:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.135 20:01:34 -- accel/accel.sh@42 -- # jq -r . 00:06:37.135 [2024-04-25 20:01:34.952689] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:37.135 [2024-04-25 20:01:34.952814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1341255 ] 00:06:37.135 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.393 [2024-04-25 20:01:35.068041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.393 [2024-04-25 20:01:35.156891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.393 [2024-04-25 20:01:35.161432] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:37.393 [2024-04-25 20:01:35.169397] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:47.386 20:01:44 -- accel/accel.sh@18 -- # out=' 00:06:47.386 SPDK Configuration: 00:06:47.386 Core mask: 0x1 00:06:47.386 00:06:47.386 Accel Perf Configuration: 00:06:47.386 Workload Type: crc32c 00:06:47.386 CRC-32C seed: 0 00:06:47.386 Transfer size: 4096 bytes 00:06:47.386 Vector count 2 00:06:47.386 Module: dsa 00:06:47.386 Queue depth: 32 00:06:47.386 Allocate depth: 32 00:06:47.386 # threads/core: 1 00:06:47.386 Run time: 1 seconds 00:06:47.386 Verify: Yes 00:06:47.386 00:06:47.386 Running for 1 seconds... 
00:06:47.386 00:06:47.386 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:47.386 ------------------------------------------------------------------------------------ 00:06:47.386 0,0 242657/s 1895 MiB/s 0 0 00:06:47.386 ==================================================================================== 00:06:47.386 Total 242657/s 947 MiB/s 0 0' 00:06:47.386 20:01:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.386 20:01:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.386 20:01:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:47.386 20:01:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:47.386 20:01:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.386 20:01:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.386 20:01:44 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:47.386 20:01:44 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:47.386 20:01:44 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:47.386 20:01:44 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:47.386 20:01:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.386 20:01:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.386 20:01:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.386 20:01:44 -- accel/accel.sh@42 -- # jq -r . 00:06:47.386 [2024-04-25 20:01:44.620765] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:47.386 [2024-04-25 20:01:44.620894] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343062 ] 00:06:47.386 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.386 [2024-04-25 20:01:44.737451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.386 [2024-04-25 20:01:44.835305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.386 [2024-04-25 20:01:44.839878] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:47.386 [2024-04-25 20:01:44.847843] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:53.974 20:01:51 -- accel/accel.sh@21 -- # val= 00:06:53.974 20:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.974 20:01:51 -- accel/accel.sh@21 -- # val= 00:06:53.974 20:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.974 20:01:51 -- accel/accel.sh@21 -- # val=0x1 00:06:53.974 20:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.974 20:01:51 -- accel/accel.sh@21 -- # val= 00:06:53.974 20:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.974 20:01:51 -- accel/accel.sh@21 -- # val= 00:06:53.974 20:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.974 20:01:51 -- accel/accel.sh@21 -- # 
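Note: with -C 2 each transfer carries two 4 KiB source buffers, which is why the per-core row above reports 1895 MiB/s for 242657 transfers/s while the Total row shows 947 MiB/s; the two figures appear to differ only in whether the vector count is folded into the bandwidth. Quick check:

  # 242657 transfers/s, vector count 2, 4096-byte buffers:
  echo $((242657 * 2 * 4096 / 1048576))   # -> 1895 MiB/s (per-core row)
  echo $((242657 * 4096 / 1048576))       # -> 947 MiB/s  (Total row)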
val=crc32c 00:06:53.974 20:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.974 20:01:51 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.974 20:01:51 -- accel/accel.sh@21 -- # val=0 00:06:53.974 20:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.974 20:01:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.974 20:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.974 20:01:51 -- accel/accel.sh@21 -- # val= 00:06:53.974 20:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.974 20:01:51 -- accel/accel.sh@21 -- # val=dsa 00:06:53.974 20:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.974 20:01:51 -- accel/accel.sh@23 -- # accel_module=dsa 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.974 20:01:51 -- accel/accel.sh@21 -- # val=32 00:06:53.974 20:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.974 20:01:51 -- accel/accel.sh@21 -- # val=32 00:06:53.974 20:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.974 20:01:51 -- accel/accel.sh@21 -- # val=1 00:06:53.974 20:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.974 20:01:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.974 20:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.974 20:01:51 -- accel/accel.sh@21 -- # val=Yes 00:06:53.974 20:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.974 20:01:51 -- accel/accel.sh@21 -- # val= 00:06:53.974 20:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.974 20:01:51 -- accel/accel.sh@21 -- # val= 00:06:53.974 20:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.974 20:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:56.601 20:01:54 -- accel/accel.sh@21 -- # val= 00:06:56.601 20:01:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.601 20:01:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.601 20:01:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.601 20:01:54 -- accel/accel.sh@21 -- # val= 00:06:56.601 20:01:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.601 20:01:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.601 20:01:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.601 20:01:54 -- accel/accel.sh@21 -- # val= 00:06:56.601 20:01:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.601 20:01:54 -- 
accel/accel.sh@20 -- # IFS=: 00:06:56.601 20:01:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.601 20:01:54 -- accel/accel.sh@21 -- # val= 00:06:56.601 20:01:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.601 20:01:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.601 20:01:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.601 20:01:54 -- accel/accel.sh@21 -- # val= 00:06:56.601 20:01:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.601 20:01:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.601 20:01:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.601 20:01:54 -- accel/accel.sh@21 -- # val= 00:06:56.601 20:01:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.601 20:01:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.601 20:01:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.601 20:01:54 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:06:56.601 20:01:54 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:56.601 20:01:54 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:06:56.601 00:06:56.601 real 0m19.394s 00:06:56.601 user 0m6.571s 00:06:56.601 sys 0m0.457s 00:06:56.601 20:01:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.601 20:01:54 -- common/autotest_common.sh@10 -- # set +x 00:06:56.601 ************************************ 00:06:56.601 END TEST accel_crc32c_C2 00:06:56.601 ************************************ 00:06:56.601 20:01:54 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:56.601 20:01:54 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:56.601 20:01:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.601 20:01:54 -- common/autotest_common.sh@10 -- # set +x 00:06:56.601 ************************************ 00:06:56.601 START TEST accel_copy 00:06:56.601 ************************************ 00:06:56.601 20:01:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:56.601 20:01:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.601 20:01:54 -- accel/accel.sh@17 -- # local accel_module 00:06:56.601 20:01:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:56.601 20:01:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:56.601 20:01:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.601 20:01:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.601 20:01:54 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:56.601 20:01:54 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:56.601 20:01:54 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:56.601 20:01:54 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:56.601 20:01:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.601 20:01:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.601 20:01:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.601 20:01:54 -- accel/accel.sh@42 -- # jq -r . 00:06:56.601 [2024-04-25 20:01:54.373789] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:56.601 [2024-04-25 20:01:54.373910] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1345160 ] 00:06:56.601 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.601 [2024-04-25 20:01:54.489519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.863 [2024-04-25 20:01:54.593330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.863 [2024-04-25 20:01:54.597891] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:56.863 [2024-04-25 20:01:54.605854] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:06.870 20:02:04 -- accel/accel.sh@18 -- # out=' 00:07:06.870 SPDK Configuration: 00:07:06.870 Core mask: 0x1 00:07:06.870 00:07:06.870 Accel Perf Configuration: 00:07:06.870 Workload Type: copy 00:07:06.870 Transfer size: 4096 bytes 00:07:06.870 Vector count 1 00:07:06.870 Module: dsa 00:07:06.870 Queue depth: 32 00:07:06.870 Allocate depth: 32 00:07:06.870 # threads/core: 1 00:07:06.870 Run time: 1 seconds 00:07:06.870 Verify: Yes 00:07:06.870 00:07:06.870 Running for 1 seconds... 00:07:06.870 00:07:06.870 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.870 ------------------------------------------------------------------------------------ 00:07:06.870 0,0 225184/s 879 MiB/s 0 0 00:07:06.870 ==================================================================================== 00:07:06.870 Total 225184/s 879 MiB/s 0 0' 00:07:06.870 20:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.870 20:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.870 20:02:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:06.870 20:02:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:06.870 20:02:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.870 20:02:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.870 20:02:04 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:06.870 20:02:04 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:06.870 20:02:04 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:06.870 20:02:04 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:06.870 20:02:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.870 20:02:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.870 20:02:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.870 20:02:04 -- accel/accel.sh@42 -- # jq -r . 00:07:06.870 [2024-04-25 20:02:04.064167] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
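Note: the same bandwidth arithmetic applies to the copy table above: 225184 4 KiB copies per second on dsa is the reported 879 MiB/s.

  echo $((225184 * 4096 / 1048576))   # -> 879 MiB/s for the dsa copy run above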
00:07:06.870 [2024-04-25 20:02:04.064293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1346962 ] 00:07:06.870 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.870 [2024-04-25 20:02:04.178278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.870 [2024-04-25 20:02:04.273026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.870 [2024-04-25 20:02:04.277599] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:06.870 [2024-04-25 20:02:04.285564] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:13.457 20:02:10 -- accel/accel.sh@21 -- # val= 00:07:13.457 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.457 20:02:10 -- accel/accel.sh@21 -- # val= 00:07:13.457 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.457 20:02:10 -- accel/accel.sh@21 -- # val=0x1 00:07:13.457 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.457 20:02:10 -- accel/accel.sh@21 -- # val= 00:07:13.457 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.457 20:02:10 -- accel/accel.sh@21 -- # val= 00:07:13.457 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.457 20:02:10 -- accel/accel.sh@21 -- # val=copy 00:07:13.457 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.457 20:02:10 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.457 20:02:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.457 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.457 20:02:10 -- accel/accel.sh@21 -- # val= 00:07:13.457 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.457 20:02:10 -- accel/accel.sh@21 -- # val=dsa 00:07:13.457 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.457 20:02:10 -- accel/accel.sh@23 -- # accel_module=dsa 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.457 20:02:10 -- accel/accel.sh@21 -- # val=32 00:07:13.457 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.457 20:02:10 -- accel/accel.sh@21 -- # val=32 00:07:13.457 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.457 20:02:10 -- 
accel/accel.sh@21 -- # val=1 00:07:13.457 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.457 20:02:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.457 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.457 20:02:10 -- accel/accel.sh@21 -- # val=Yes 00:07:13.457 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.457 20:02:10 -- accel/accel.sh@21 -- # val= 00:07:13.457 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.457 20:02:10 -- accel/accel.sh@21 -- # val= 00:07:13.457 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:13.457 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:16.003 20:02:13 -- accel/accel.sh@21 -- # val= 00:07:16.003 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.003 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.003 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.003 20:02:13 -- accel/accel.sh@21 -- # val= 00:07:16.003 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.003 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.003 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.003 20:02:13 -- accel/accel.sh@21 -- # val= 00:07:16.003 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.003 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.003 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.003 20:02:13 -- accel/accel.sh@21 -- # val= 00:07:16.003 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.003 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.003 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.003 20:02:13 -- accel/accel.sh@21 -- # val= 00:07:16.003 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.003 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.003 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.003 20:02:13 -- accel/accel.sh@21 -- # val= 00:07:16.003 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.003 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.003 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.003 20:02:13 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:07:16.003 20:02:13 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:16.003 20:02:13 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:07:16.003 00:07:16.003 real 0m19.382s 00:07:16.004 user 0m6.531s 00:07:16.004 sys 0m0.492s 00:07:16.004 20:02:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.004 20:02:13 -- common/autotest_common.sh@10 -- # set +x 00:07:16.004 ************************************ 00:07:16.004 END TEST accel_copy 00:07:16.004 ************************************ 00:07:16.004 20:02:13 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:16.004 20:02:13 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:16.004 20:02:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.004 20:02:13 -- common/autotest_common.sh@10 -- # set +x 00:07:16.004 ************************************ 00:07:16.004 START TEST accel_fill 
00:07:16.004 ************************************ 00:07:16.004 20:02:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:16.004 20:02:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.004 20:02:13 -- accel/accel.sh@17 -- # local accel_module 00:07:16.004 20:02:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:16.004 20:02:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:16.004 20:02:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.004 20:02:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.004 20:02:13 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:16.004 20:02:13 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:16.004 20:02:13 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:16.004 20:02:13 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:16.004 20:02:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.004 20:02:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.004 20:02:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.004 20:02:13 -- accel/accel.sh@42 -- # jq -r . 00:07:16.004 [2024-04-25 20:02:13.788723] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:16.004 [2024-04-25 20:02:13.788851] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1348897 ] 00:07:16.004 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.004 [2024-04-25 20:02:13.905505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.265 [2024-04-25 20:02:14.010722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.265 [2024-04-25 20:02:14.015283] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:16.265 [2024-04-25 20:02:14.023241] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:26.269 20:02:23 -- accel/accel.sh@18 -- # out=' 00:07:26.269 SPDK Configuration: 00:07:26.269 Core mask: 0x1 00:07:26.269 00:07:26.269 Accel Perf Configuration: 00:07:26.269 Workload Type: fill 00:07:26.269 Fill pattern: 0x80 00:07:26.269 Transfer size: 4096 bytes 00:07:26.269 Vector count 1 00:07:26.269 Module: dsa 00:07:26.269 Queue depth: 64 00:07:26.269 Allocate depth: 64 00:07:26.269 # threads/core: 1 00:07:26.269 Run time: 1 seconds 00:07:26.269 Verify: Yes 00:07:26.269 00:07:26.269 Running for 1 seconds... 
00:07:26.269 00:07:26.269 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.269 ------------------------------------------------------------------------------------ 00:07:26.269 0,0 341315/s 1333 MiB/s 0 0 00:07:26.269 ==================================================================================== 00:07:26.269 Total 341315/s 1333 MiB/s 0 0' 00:07:26.269 20:02:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.269 20:02:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.269 20:02:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:26.269 20:02:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:26.269 20:02:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.269 20:02:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.269 20:02:23 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:26.269 20:02:23 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:26.269 20:02:23 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:26.269 20:02:23 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:26.269 20:02:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.269 20:02:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.269 20:02:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.269 20:02:23 -- accel/accel.sh@42 -- # jq -r . 00:07:26.270 [2024-04-25 20:02:23.505457] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:26.270 [2024-04-25 20:02:23.505592] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350862 ] 00:07:26.270 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.270 [2024-04-25 20:02:23.622921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.270 [2024-04-25 20:02:23.718753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.270 [2024-04-25 20:02:23.723305] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:26.270 [2024-04-25 20:02:23.731270] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:32.861 20:02:30 -- accel/accel.sh@21 -- # val= 00:07:32.861 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.861 20:02:30 -- accel/accel.sh@21 -- # val= 00:07:32.861 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.861 20:02:30 -- accel/accel.sh@21 -- # val=0x1 00:07:32.861 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.861 20:02:30 -- accel/accel.sh@21 -- # val= 00:07:32.861 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.861 20:02:30 -- accel/accel.sh@21 -- # val= 00:07:32.861 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.861 20:02:30 -- 
accel/accel.sh@21 -- # val=fill 00:07:32.861 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.861 20:02:30 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.861 20:02:30 -- accel/accel.sh@21 -- # val=0x80 00:07:32.861 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.861 20:02:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:32.861 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.861 20:02:30 -- accel/accel.sh@21 -- # val= 00:07:32.861 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.861 20:02:30 -- accel/accel.sh@21 -- # val=dsa 00:07:32.861 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.861 20:02:30 -- accel/accel.sh@23 -- # accel_module=dsa 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.861 20:02:30 -- accel/accel.sh@21 -- # val=64 00:07:32.861 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.861 20:02:30 -- accel/accel.sh@21 -- # val=64 00:07:32.861 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.861 20:02:30 -- accel/accel.sh@21 -- # val=1 00:07:32.861 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.861 20:02:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:32.861 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.861 20:02:30 -- accel/accel.sh@21 -- # val=Yes 00:07:32.861 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.861 20:02:30 -- accel/accel.sh@21 -- # val= 00:07:32.861 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.861 20:02:30 -- accel/accel.sh@21 -- # val= 00:07:32.861 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.861 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:35.406 20:02:33 -- accel/accel.sh@21 -- # val= 00:07:35.406 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.406 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.406 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.406 20:02:33 -- accel/accel.sh@21 -- # val= 00:07:35.406 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.406 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.406 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.406 20:02:33 -- accel/accel.sh@21 -- # val= 00:07:35.406 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.406 
20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.406 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.406 20:02:33 -- accel/accel.sh@21 -- # val= 00:07:35.406 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.406 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.406 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.406 20:02:33 -- accel/accel.sh@21 -- # val= 00:07:35.406 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.406 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.406 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.406 20:02:33 -- accel/accel.sh@21 -- # val= 00:07:35.406 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.406 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.406 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.406 20:02:33 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:07:35.406 20:02:33 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:35.406 20:02:33 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:07:35.406 00:07:35.406 real 0m19.402s 00:07:35.406 user 0m6.539s 00:07:35.406 sys 0m0.492s 00:07:35.406 20:02:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.406 20:02:33 -- common/autotest_common.sh@10 -- # set +x 00:07:35.406 ************************************ 00:07:35.406 END TEST accel_fill 00:07:35.406 ************************************ 00:07:35.406 20:02:33 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:35.406 20:02:33 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:35.406 20:02:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:35.406 20:02:33 -- common/autotest_common.sh@10 -- # set +x 00:07:35.406 ************************************ 00:07:35.406 START TEST accel_copy_crc32c 00:07:35.406 ************************************ 00:07:35.407 20:02:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:35.407 20:02:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.407 20:02:33 -- accel/accel.sh@17 -- # local accel_module 00:07:35.407 20:02:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:35.407 20:02:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:35.407 20:02:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.407 20:02:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.407 20:02:33 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:35.407 20:02:33 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:35.407 20:02:33 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:35.407 20:02:33 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:35.407 20:02:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.407 20:02:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.407 20:02:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.407 20:02:33 -- accel/accel.sh@42 -- # jq -r . 00:07:35.407 [2024-04-25 20:02:33.212937] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
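[editor note] The MiB/s column in each summary is simply transfers per second multiplied by the transfer size, so the figures can be sanity-checked by hand; taking the copy and fill runs above:
    # copy run above: 225184 transfers/s x 4096-byte transfers
    echo $(( 225184 * 4096 / 1024 / 1024 ))   # -> 879  (matches the reported 879 MiB/s)
    # fill run above: 341315 transfers/s x 4096-byte transfers
    echo $(( 341315 * 4096 / 1024 / 1024 ))   # -> 1333 (matches the reported 1333 MiB/s)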
00:07:35.407 [2024-04-25 20:02:33.213061] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352699 ] 00:07:35.407 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.407 [2024-04-25 20:02:33.329920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.666 [2024-04-25 20:02:33.421920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.666 [2024-04-25 20:02:33.426475] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:35.666 [2024-04-25 20:02:33.434442] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:45.708 20:02:42 -- accel/accel.sh@18 -- # out=' 00:07:45.708 SPDK Configuration: 00:07:45.708 Core mask: 0x1 00:07:45.708 00:07:45.708 Accel Perf Configuration: 00:07:45.708 Workload Type: copy_crc32c 00:07:45.708 CRC-32C seed: 0 00:07:45.708 Vector size: 4096 bytes 00:07:45.708 Transfer size: 4096 bytes 00:07:45.708 Vector count 1 00:07:45.708 Module: dsa 00:07:45.708 Queue depth: 32 00:07:45.708 Allocate depth: 32 00:07:45.708 # threads/core: 1 00:07:45.708 Run time: 1 seconds 00:07:45.708 Verify: Yes 00:07:45.708 00:07:45.708 Running for 1 seconds... 00:07:45.708 00:07:45.708 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:45.708 ------------------------------------------------------------------------------------ 00:07:45.708 0,0 210272/s 821 MiB/s 0 0 00:07:45.708 ==================================================================================== 00:07:45.708 Total 210272/s 821 MiB/s 0 0' 00:07:45.708 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.708 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.708 20:02:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:45.708 20:02:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:45.708 20:02:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.708 20:02:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.708 20:02:42 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:45.708 20:02:42 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:45.708 20:02:42 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:45.708 20:02:42 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:45.708 20:02:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.708 20:02:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.708 20:02:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.708 20:02:42 -- accel/accel.sh@42 -- # jq -r . 00:07:45.708 [2024-04-25 20:02:42.895789] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
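[editor note] For reference, copy_crc32c is the combined operation: each 4096-byte source buffer is copied and a CRC-32C is computed over it in one operation, using the seed of 0 shown in the configuration dump above. A rough way to compare the DSA-offloaded figure (210272 transfers/s, 821 MiB/s) against the host CPU would be to run the same workload without injecting the DSA/IAA config, which should fall back to accel's software module; that fallback is an assumption, since the harness in this log always supplies the config.
    # assumed software-module baseline: same flags, no -c config injected
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w copy_crc32c -y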
00:07:45.708 [2024-04-25 20:02:42.895934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354766 ] 00:07:45.708 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.708 [2024-04-25 20:02:43.026316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.708 [2024-04-25 20:02:43.119209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.708 [2024-04-25 20:02:43.123823] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:45.708 [2024-04-25 20:02:43.131777] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:52.281 20:02:49 -- accel/accel.sh@21 -- # val= 00:07:52.281 20:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:52.281 20:02:49 -- accel/accel.sh@21 -- # val= 00:07:52.281 20:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:52.281 20:02:49 -- accel/accel.sh@21 -- # val=0x1 00:07:52.281 20:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:52.281 20:02:49 -- accel/accel.sh@21 -- # val= 00:07:52.281 20:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:52.281 20:02:49 -- accel/accel.sh@21 -- # val= 00:07:52.281 20:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:52.281 20:02:49 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:52.281 20:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.281 20:02:49 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:52.281 20:02:49 -- accel/accel.sh@21 -- # val=0 00:07:52.281 20:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:52.281 20:02:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:52.281 20:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:52.281 20:02:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:52.281 20:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:52.281 20:02:49 -- accel/accel.sh@21 -- # val= 00:07:52.281 20:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:52.281 20:02:49 -- accel/accel.sh@21 -- # val=dsa 00:07:52.281 20:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.281 20:02:49 -- accel/accel.sh@23 -- # accel_module=dsa 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # read -r var val 
00:07:52.281 20:02:49 -- accel/accel.sh@21 -- # val=32 00:07:52.281 20:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:52.281 20:02:49 -- accel/accel.sh@21 -- # val=32 00:07:52.281 20:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:52.281 20:02:49 -- accel/accel.sh@21 -- # val=1 00:07:52.281 20:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:52.281 20:02:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:52.281 20:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:52.281 20:02:49 -- accel/accel.sh@21 -- # val=Yes 00:07:52.281 20:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:52.281 20:02:49 -- accel/accel.sh@21 -- # val= 00:07:52.281 20:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:52.281 20:02:49 -- accel/accel.sh@21 -- # val= 00:07:52.281 20:02:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # IFS=: 00:07:52.281 20:02:49 -- accel/accel.sh@20 -- # read -r var val 00:07:54.816 20:02:52 -- accel/accel.sh@21 -- # val= 00:07:54.816 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.816 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.816 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.816 20:02:52 -- accel/accel.sh@21 -- # val= 00:07:54.816 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.816 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.816 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.816 20:02:52 -- accel/accel.sh@21 -- # val= 00:07:54.816 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.816 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.816 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.816 20:02:52 -- accel/accel.sh@21 -- # val= 00:07:54.816 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.816 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.816 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.816 20:02:52 -- accel/accel.sh@21 -- # val= 00:07:54.816 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.816 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.816 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.816 20:02:52 -- accel/accel.sh@21 -- # val= 00:07:54.816 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.816 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.816 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.816 20:02:52 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:07:54.816 20:02:52 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:54.816 20:02:52 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:07:54.816 00:07:54.816 real 0m19.366s 00:07:54.816 user 0m6.525s 00:07:54.816 sys 0m0.489s 00:07:54.816 20:02:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.816 20:02:52 -- common/autotest_common.sh@10 -- # set +x 00:07:54.816 ************************************ 
00:07:54.816 END TEST accel_copy_crc32c 00:07:54.816 ************************************ 00:07:54.816 20:02:52 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:54.816 20:02:52 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:54.816 20:02:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.816 20:02:52 -- common/autotest_common.sh@10 -- # set +x 00:07:54.816 ************************************ 00:07:54.816 START TEST accel_copy_crc32c_C2 00:07:54.816 ************************************ 00:07:54.816 20:02:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:54.816 20:02:52 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.816 20:02:52 -- accel/accel.sh@17 -- # local accel_module 00:07:54.816 20:02:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:54.816 20:02:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:54.816 20:02:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.816 20:02:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:54.816 20:02:52 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:54.816 20:02:52 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:54.816 20:02:52 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:54.816 20:02:52 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:54.816 20:02:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:54.816 20:02:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:54.816 20:02:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:54.816 20:02:52 -- accel/accel.sh@42 -- # jq -r . 00:07:54.816 [2024-04-25 20:02:52.612974] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:54.816 [2024-04-25 20:02:52.613065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1356595 ] 00:07:54.816 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.816 [2024-04-25 20:02:52.700968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.076 [2024-04-25 20:02:52.795301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.076 [2024-04-25 20:02:52.799974] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:55.076 [2024-04-25 20:02:52.807936] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:05.078 20:03:02 -- accel/accel.sh@18 -- # out=' 00:08:05.078 SPDK Configuration: 00:08:05.078 Core mask: 0x1 00:08:05.078 00:08:05.078 Accel Perf Configuration: 00:08:05.078 Workload Type: copy_crc32c 00:08:05.078 CRC-32C seed: 0 00:08:05.078 Vector size: 4096 bytes 00:08:05.078 Transfer size: 8192 bytes 00:08:05.078 Vector count 2 00:08:05.078 Module: dsa 00:08:05.078 Queue depth: 32 00:08:05.078 Allocate depth: 32 00:08:05.078 # threads/core: 1 00:08:05.078 Run time: 1 seconds 00:08:05.078 Verify: Yes 00:08:05.078 00:08:05.078 Running for 1 seconds... 
00:08:05.078 00:08:05.078 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:05.078 ------------------------------------------------------------------------------------ 00:08:05.078 0,0 139157/s 1087 MiB/s 0 0 00:08:05.078 ==================================================================================== 00:08:05.078 Total 139157/s 543 MiB/s 0 0' 00:08:05.078 20:03:02 -- accel/accel.sh@20 -- # IFS=: 00:08:05.078 20:03:02 -- accel/accel.sh@20 -- # read -r var val 00:08:05.078 20:03:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:05.078 20:03:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:05.078 20:03:02 -- accel/accel.sh@12 -- # build_accel_config 00:08:05.078 20:03:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:05.078 20:03:02 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:05.078 20:03:02 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:05.078 20:03:02 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:05.078 20:03:02 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:05.078 20:03:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:05.078 20:03:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:05.078 20:03:02 -- accel/accel.sh@41 -- # local IFS=, 00:08:05.078 20:03:02 -- accel/accel.sh@42 -- # jq -r . 00:08:05.078 [2024-04-25 20:03:02.296496] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:05.078 [2024-04-25 20:03:02.296626] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1358735 ] 00:08:05.078 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.078 [2024-04-25 20:03:02.409677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.078 [2024-04-25 20:03:02.505042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.078 [2024-04-25 20:03:02.509554] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:05.078 [2024-04-25 20:03:02.517529] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:11.738 20:03:08 -- accel/accel.sh@21 -- # val= 00:08:11.738 20:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.738 20:03:08 -- accel/accel.sh@21 -- # val= 00:08:11.738 20:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.738 20:03:08 -- accel/accel.sh@21 -- # val=0x1 00:08:11.738 20:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.738 20:03:08 -- accel/accel.sh@21 -- # val= 00:08:11.738 20:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.738 20:03:08 -- accel/accel.sh@21 -- # val= 00:08:11.738 20:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.738 20:03:08 -- accel/accel.sh@21 
-- # val=copy_crc32c 00:08:11.738 20:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.738 20:03:08 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.738 20:03:08 -- accel/accel.sh@21 -- # val=0 00:08:11.738 20:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.738 20:03:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:11.738 20:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.738 20:03:08 -- accel/accel.sh@21 -- # val='8192 bytes' 00:08:11.738 20:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.738 20:03:08 -- accel/accel.sh@21 -- # val= 00:08:11.738 20:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.738 20:03:08 -- accel/accel.sh@21 -- # val=dsa 00:08:11.738 20:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.738 20:03:08 -- accel/accel.sh@23 -- # accel_module=dsa 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.738 20:03:08 -- accel/accel.sh@21 -- # val=32 00:08:11.738 20:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.738 20:03:08 -- accel/accel.sh@21 -- # val=32 00:08:11.738 20:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.738 20:03:08 -- accel/accel.sh@21 -- # val=1 00:08:11.738 20:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.738 20:03:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:11.738 20:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.738 20:03:08 -- accel/accel.sh@21 -- # val=Yes 00:08:11.738 20:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.738 20:03:08 -- accel/accel.sh@21 -- # val= 00:08:11.738 20:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.738 20:03:08 -- accel/accel.sh@21 -- # val= 00:08:11.738 20:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.738 20:03:08 -- accel/accel.sh@20 -- # read -r var val 00:08:14.275 20:03:11 -- accel/accel.sh@21 -- # val= 00:08:14.275 20:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.275 20:03:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.275 20:03:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.275 20:03:11 -- accel/accel.sh@21 -- # val= 00:08:14.275 20:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.275 
20:03:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.275 20:03:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.275 20:03:11 -- accel/accel.sh@21 -- # val= 00:08:14.275 20:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.275 20:03:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.275 20:03:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.275 20:03:11 -- accel/accel.sh@21 -- # val= 00:08:14.275 20:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.275 20:03:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.275 20:03:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.275 20:03:11 -- accel/accel.sh@21 -- # val= 00:08:14.275 20:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.275 20:03:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.275 20:03:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.275 20:03:11 -- accel/accel.sh@21 -- # val= 00:08:14.275 20:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.275 20:03:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.275 20:03:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.275 20:03:11 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:08:14.275 20:03:11 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:14.275 20:03:11 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:08:14.275 00:08:14.275 real 0m19.370s 00:08:14.275 user 0m6.566s 00:08:14.275 sys 0m0.436s 00:08:14.275 20:03:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.275 20:03:11 -- common/autotest_common.sh@10 -- # set +x 00:08:14.275 ************************************ 00:08:14.275 END TEST accel_copy_crc32c_C2 00:08:14.275 ************************************ 00:08:14.275 20:03:11 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:14.275 20:03:11 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:14.275 20:03:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:14.275 20:03:11 -- common/autotest_common.sh@10 -- # set +x 00:08:14.275 ************************************ 00:08:14.275 START TEST accel_dualcast 00:08:14.275 ************************************ 00:08:14.275 20:03:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:08:14.275 20:03:11 -- accel/accel.sh@16 -- # local accel_opc 00:08:14.275 20:03:11 -- accel/accel.sh@17 -- # local accel_module 00:08:14.275 20:03:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:08:14.275 20:03:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:14.275 20:03:11 -- accel/accel.sh@12 -- # build_accel_config 00:08:14.275 20:03:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:14.275 20:03:11 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:14.275 20:03:11 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:14.275 20:03:11 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:14.275 20:03:11 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:14.275 20:03:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:14.275 20:03:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:14.275 20:03:11 -- accel/accel.sh@41 -- # local IFS=, 00:08:14.275 20:03:11 -- accel/accel.sh@42 -- # jq -r . 00:08:14.275 [2024-04-25 20:03:12.022833] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
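[editor note] One detail worth flagging in the copy_crc32c -C 2 summary above: the per-core row reports 1087 MiB/s while the Total row reports 543 MiB/s for the same 139157 transfers/s. The per-core figure matches the full 8192-byte transfer size and the Total figure matches the 4096-byte vector size, so the Total row appears to be scaled by vector size rather than transfer size:
    echo $(( 139157 * 8192 / 1024 / 1024 ))   # -> 1087 MiB/s (0,0 row)
    echo $(( 139157 * 4096 / 1024 / 1024 ))   # ->  543 MiB/s (Total row)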
00:08:14.275 [2024-04-25 20:03:12.022952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1361062 ] 00:08:14.275 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.275 [2024-04-25 20:03:12.147309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.535 [2024-04-25 20:03:12.241584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.535 [2024-04-25 20:03:12.246183] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:14.535 [2024-04-25 20:03:12.254137] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:24.529 20:03:21 -- accel/accel.sh@18 -- # out=' 00:08:24.529 SPDK Configuration: 00:08:24.529 Core mask: 0x1 00:08:24.529 00:08:24.529 Accel Perf Configuration: 00:08:24.529 Workload Type: dualcast 00:08:24.529 Transfer size: 4096 bytes 00:08:24.529 Vector count 1 00:08:24.529 Module: dsa 00:08:24.529 Queue depth: 32 00:08:24.529 Allocate depth: 32 00:08:24.529 # threads/core: 1 00:08:24.529 Run time: 1 seconds 00:08:24.529 Verify: Yes 00:08:24.529 00:08:24.529 Running for 1 seconds... 00:08:24.529 00:08:24.529 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:24.529 ------------------------------------------------------------------------------------ 00:08:24.529 0,0 221344/s 864 MiB/s 0 0 00:08:24.529 ==================================================================================== 00:08:24.529 Total 221344/s 864 MiB/s 0 0' 00:08:24.529 20:03:21 -- accel/accel.sh@20 -- # IFS=: 00:08:24.529 20:03:21 -- accel/accel.sh@20 -- # read -r var val 00:08:24.529 20:03:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:24.529 20:03:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:24.529 20:03:21 -- accel/accel.sh@12 -- # build_accel_config 00:08:24.529 20:03:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:24.529 20:03:21 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:24.529 20:03:21 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:24.529 20:03:21 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:24.529 20:03:21 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:24.529 20:03:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:24.529 20:03:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:24.529 20:03:21 -- accel/accel.sh@41 -- # local IFS=, 00:08:24.529 20:03:21 -- accel/accel.sh@42 -- # jq -r . 00:08:24.529 [2024-04-25 20:03:21.711266] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:24.529 [2024-04-25 20:03:21.711383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1362861 ] 00:08:24.529 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.529 [2024-04-25 20:03:21.824154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.529 [2024-04-25 20:03:21.919096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.529 [2024-04-25 20:03:21.923737] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:24.529 [2024-04-25 20:03:21.931705] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:31.107 20:03:28 -- accel/accel.sh@21 -- # val= 00:08:31.107 20:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # IFS=: 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # read -r var val 00:08:31.107 20:03:28 -- accel/accel.sh@21 -- # val= 00:08:31.107 20:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # IFS=: 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # read -r var val 00:08:31.107 20:03:28 -- accel/accel.sh@21 -- # val=0x1 00:08:31.107 20:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # IFS=: 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # read -r var val 00:08:31.107 20:03:28 -- accel/accel.sh@21 -- # val= 00:08:31.107 20:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # IFS=: 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # read -r var val 00:08:31.107 20:03:28 -- accel/accel.sh@21 -- # val= 00:08:31.107 20:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # IFS=: 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # read -r var val 00:08:31.107 20:03:28 -- accel/accel.sh@21 -- # val=dualcast 00:08:31.107 20:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.107 20:03:28 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # IFS=: 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # read -r var val 00:08:31.107 20:03:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:31.107 20:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # IFS=: 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # read -r var val 00:08:31.107 20:03:28 -- accel/accel.sh@21 -- # val= 00:08:31.107 20:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # IFS=: 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # read -r var val 00:08:31.107 20:03:28 -- accel/accel.sh@21 -- # val=dsa 00:08:31.107 20:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.107 20:03:28 -- accel/accel.sh@23 -- # accel_module=dsa 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # IFS=: 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # read -r var val 00:08:31.107 20:03:28 -- accel/accel.sh@21 -- # val=32 00:08:31.107 20:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # IFS=: 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # read -r var val 00:08:31.107 20:03:28 -- accel/accel.sh@21 -- # val=32 00:08:31.107 20:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # IFS=: 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # read -r var val 00:08:31.107 20:03:28 -- 
accel/accel.sh@21 -- # val=1 00:08:31.107 20:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # IFS=: 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # read -r var val 00:08:31.107 20:03:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:31.107 20:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # IFS=: 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # read -r var val 00:08:31.107 20:03:28 -- accel/accel.sh@21 -- # val=Yes 00:08:31.107 20:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # IFS=: 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # read -r var val 00:08:31.107 20:03:28 -- accel/accel.sh@21 -- # val= 00:08:31.107 20:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # IFS=: 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # read -r var val 00:08:31.107 20:03:28 -- accel/accel.sh@21 -- # val= 00:08:31.107 20:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # IFS=: 00:08:31.107 20:03:28 -- accel/accel.sh@20 -- # read -r var val 00:08:33.651 20:03:31 -- accel/accel.sh@21 -- # val= 00:08:33.651 20:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.651 20:03:31 -- accel/accel.sh@20 -- # IFS=: 00:08:33.651 20:03:31 -- accel/accel.sh@20 -- # read -r var val 00:08:33.651 20:03:31 -- accel/accel.sh@21 -- # val= 00:08:33.651 20:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.651 20:03:31 -- accel/accel.sh@20 -- # IFS=: 00:08:33.651 20:03:31 -- accel/accel.sh@20 -- # read -r var val 00:08:33.651 20:03:31 -- accel/accel.sh@21 -- # val= 00:08:33.651 20:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.651 20:03:31 -- accel/accel.sh@20 -- # IFS=: 00:08:33.651 20:03:31 -- accel/accel.sh@20 -- # read -r var val 00:08:33.651 20:03:31 -- accel/accel.sh@21 -- # val= 00:08:33.651 20:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.651 20:03:31 -- accel/accel.sh@20 -- # IFS=: 00:08:33.651 20:03:31 -- accel/accel.sh@20 -- # read -r var val 00:08:33.651 20:03:31 -- accel/accel.sh@21 -- # val= 00:08:33.651 20:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.651 20:03:31 -- accel/accel.sh@20 -- # IFS=: 00:08:33.651 20:03:31 -- accel/accel.sh@20 -- # read -r var val 00:08:33.651 20:03:31 -- accel/accel.sh@21 -- # val= 00:08:33.651 20:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.651 20:03:31 -- accel/accel.sh@20 -- # IFS=: 00:08:33.651 20:03:31 -- accel/accel.sh@20 -- # read -r var val 00:08:33.651 20:03:31 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:08:33.651 20:03:31 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:08:33.651 20:03:31 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:08:33.651 00:08:33.651 real 0m19.366s 00:08:33.651 user 0m6.543s 00:08:33.651 sys 0m0.471s 00:08:33.651 20:03:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.651 20:03:31 -- common/autotest_common.sh@10 -- # set +x 00:08:33.651 ************************************ 00:08:33.651 END TEST accel_dualcast 00:08:33.651 ************************************ 00:08:33.651 20:03:31 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:33.651 20:03:31 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:33.651 20:03:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.651 20:03:31 -- common/autotest_common.sh@10 -- # set +x 00:08:33.651 ************************************ 00:08:33.651 START TEST accel_compare 00:08:33.651 
************************************ 00:08:33.651 20:03:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:08:33.651 20:03:31 -- accel/accel.sh@16 -- # local accel_opc 00:08:33.651 20:03:31 -- accel/accel.sh@17 -- # local accel_module 00:08:33.651 20:03:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:08:33.651 20:03:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:33.651 20:03:31 -- accel/accel.sh@12 -- # build_accel_config 00:08:33.651 20:03:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:33.651 20:03:31 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:33.651 20:03:31 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:33.651 20:03:31 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:33.651 20:03:31 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:33.651 20:03:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:33.651 20:03:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:33.651 20:03:31 -- accel/accel.sh@41 -- # local IFS=, 00:08:33.651 20:03:31 -- accel/accel.sh@42 -- # jq -r . 00:08:33.651 [2024-04-25 20:03:31.418480] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:33.651 [2024-04-25 20:03:31.418612] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1364959 ] 00:08:33.651 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.651 [2024-04-25 20:03:31.533608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.911 [2024-04-25 20:03:31.628246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.911 [2024-04-25 20:03:31.632813] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:33.911 [2024-04-25 20:03:31.640780] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:43.915 20:03:41 -- accel/accel.sh@18 -- # out=' 00:08:43.915 SPDK Configuration: 00:08:43.915 Core mask: 0x1 00:08:43.915 00:08:43.915 Accel Perf Configuration: 00:08:43.915 Workload Type: compare 00:08:43.915 Transfer size: 4096 bytes 00:08:43.915 Vector count 1 00:08:43.915 Module: dsa 00:08:43.915 Queue depth: 32 00:08:43.915 Allocate depth: 32 00:08:43.915 # threads/core: 1 00:08:43.915 Run time: 1 seconds 00:08:43.915 Verify: Yes 00:08:43.915 00:08:43.915 Running for 1 seconds... 
00:08:43.915 00:08:43.915 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:43.915 ------------------------------------------------------------------------------------ 00:08:43.915 0,0 228832/s 893 MiB/s 0 0 00:08:43.915 ==================================================================================== 00:08:43.915 Total 228832/s 893 MiB/s 0 0' 00:08:43.915 20:03:41 -- accel/accel.sh@20 -- # IFS=: 00:08:43.915 20:03:41 -- accel/accel.sh@20 -- # read -r var val 00:08:43.915 20:03:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:43.915 20:03:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:43.915 20:03:41 -- accel/accel.sh@12 -- # build_accel_config 00:08:43.915 20:03:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:43.915 20:03:41 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:43.915 20:03:41 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:43.915 20:03:41 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:43.915 20:03:41 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:43.915 20:03:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:43.915 20:03:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:43.915 20:03:41 -- accel/accel.sh@41 -- # local IFS=, 00:08:43.915 20:03:41 -- accel/accel.sh@42 -- # jq -r . 00:08:43.915 [2024-04-25 20:03:41.088747] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:43.915 [2024-04-25 20:03:41.088875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1366757 ] 00:08:43.915 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.915 [2024-04-25 20:03:41.204055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.915 [2024-04-25 20:03:41.302617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.915 [2024-04-25 20:03:41.307180] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:43.915 [2024-04-25 20:03:41.315145] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:50.503 20:03:47 -- accel/accel.sh@21 -- # val= 00:08:50.503 20:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # IFS=: 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # read -r var val 00:08:50.503 20:03:47 -- accel/accel.sh@21 -- # val= 00:08:50.503 20:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # IFS=: 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # read -r var val 00:08:50.503 20:03:47 -- accel/accel.sh@21 -- # val=0x1 00:08:50.503 20:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # IFS=: 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # read -r var val 00:08:50.503 20:03:47 -- accel/accel.sh@21 -- # val= 00:08:50.503 20:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # IFS=: 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # read -r var val 00:08:50.503 20:03:47 -- accel/accel.sh@21 -- # val= 00:08:50.503 20:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # IFS=: 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # read -r var val 00:08:50.503 20:03:47 -- accel/accel.sh@21 -- # val=compare 
00:08:50.503 20:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.503 20:03:47 -- accel/accel.sh@24 -- # accel_opc=compare 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # IFS=: 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # read -r var val 00:08:50.503 20:03:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:50.503 20:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # IFS=: 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # read -r var val 00:08:50.503 20:03:47 -- accel/accel.sh@21 -- # val= 00:08:50.503 20:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # IFS=: 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # read -r var val 00:08:50.503 20:03:47 -- accel/accel.sh@21 -- # val=dsa 00:08:50.503 20:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.503 20:03:47 -- accel/accel.sh@23 -- # accel_module=dsa 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # IFS=: 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # read -r var val 00:08:50.503 20:03:47 -- accel/accel.sh@21 -- # val=32 00:08:50.503 20:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # IFS=: 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # read -r var val 00:08:50.503 20:03:47 -- accel/accel.sh@21 -- # val=32 00:08:50.503 20:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # IFS=: 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # read -r var val 00:08:50.503 20:03:47 -- accel/accel.sh@21 -- # val=1 00:08:50.503 20:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # IFS=: 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # read -r var val 00:08:50.503 20:03:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:50.503 20:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # IFS=: 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # read -r var val 00:08:50.503 20:03:47 -- accel/accel.sh@21 -- # val=Yes 00:08:50.503 20:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # IFS=: 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # read -r var val 00:08:50.503 20:03:47 -- accel/accel.sh@21 -- # val= 00:08:50.503 20:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # IFS=: 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # read -r var val 00:08:50.503 20:03:47 -- accel/accel.sh@21 -- # val= 00:08:50.503 20:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # IFS=: 00:08:50.503 20:03:47 -- accel/accel.sh@20 -- # read -r var val 00:08:53.046 20:03:50 -- accel/accel.sh@21 -- # val= 00:08:53.046 20:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.046 20:03:50 -- accel/accel.sh@20 -- # IFS=: 00:08:53.046 20:03:50 -- accel/accel.sh@20 -- # read -r var val 00:08:53.046 20:03:50 -- accel/accel.sh@21 -- # val= 00:08:53.046 20:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.046 20:03:50 -- accel/accel.sh@20 -- # IFS=: 00:08:53.046 20:03:50 -- accel/accel.sh@20 -- # read -r var val 00:08:53.046 20:03:50 -- accel/accel.sh@21 -- # val= 00:08:53.046 20:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.046 20:03:50 -- accel/accel.sh@20 -- # IFS=: 00:08:53.046 20:03:50 -- accel/accel.sh@20 -- # read -r var val 00:08:53.046 20:03:50 -- accel/accel.sh@21 -- # val= 00:08:53.046 20:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.046 20:03:50 -- accel/accel.sh@20 -- # 
IFS=: 00:08:53.046 20:03:50 -- accel/accel.sh@20 -- # read -r var val 00:08:53.046 20:03:50 -- accel/accel.sh@21 -- # val= 00:08:53.046 20:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.046 20:03:50 -- accel/accel.sh@20 -- # IFS=: 00:08:53.046 20:03:50 -- accel/accel.sh@20 -- # read -r var val 00:08:53.046 20:03:50 -- accel/accel.sh@21 -- # val= 00:08:53.046 20:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:53.046 20:03:50 -- accel/accel.sh@20 -- # IFS=: 00:08:53.046 20:03:50 -- accel/accel.sh@20 -- # read -r var val 00:08:53.046 20:03:50 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:08:53.046 20:03:50 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:08:53.046 20:03:50 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:08:53.046 00:08:53.046 real 0m19.393s 00:08:53.046 user 0m6.568s 00:08:53.046 sys 0m0.464s 00:08:53.046 20:03:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.046 20:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:53.046 ************************************ 00:08:53.046 END TEST accel_compare 00:08:53.046 ************************************ 00:08:53.046 20:03:50 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:53.046 20:03:50 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:53.046 20:03:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:53.046 20:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:53.046 ************************************ 00:08:53.046 START TEST accel_xor 00:08:53.046 ************************************ 00:08:53.046 20:03:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:08:53.046 20:03:50 -- accel/accel.sh@16 -- # local accel_opc 00:08:53.046 20:03:50 -- accel/accel.sh@17 -- # local accel_module 00:08:53.046 20:03:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:08:53.046 20:03:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:53.046 20:03:50 -- accel/accel.sh@12 -- # build_accel_config 00:08:53.046 20:03:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:53.046 20:03:50 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:53.046 20:03:50 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:53.046 20:03:50 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:53.046 20:03:50 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:53.046 20:03:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:53.046 20:03:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:53.046 20:03:50 -- accel/accel.sh@41 -- # local IFS=, 00:08:53.046 20:03:50 -- accel/accel.sh@42 -- # jq -r . 00:08:53.046 [2024-04-25 20:03:50.846516] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
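The runs of 'val=' lines in the trace above are accel.sh reading the accel_perf report back line by line ('local IFS=:', 'read -r var val', 'case "$var" in') so it can assert that the expected opcode actually ran on the expected module; the closing checks '[[ -n dsa ]]', '[[ -n compare ]]' and '[[ dsa == \d\s\a ]]' are that assertion. A simplified sketch of the idea, with the case patterns assumed rather than copied from the script:

    parse_report_sketch() {
        local accel_module= accel_opc= var val
        while IFS=: read -r var val; do
            case "$var" in
                *'Workload Type'*) accel_opc=${val//[[:space:]]/} ;;
                *Module*)          accel_module=${val//[[:space:]]/} ;;
            esac
        done <<< "$1"
        [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == dsa ]]
    }
    # usage: parse_report_sketch "$out"   (where $out holds the captured report text)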
00:08:53.046 [2024-04-25 20:03:50.846642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1368770 ] 00:08:53.046 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.046 [2024-04-25 20:03:50.962627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.307 [2024-04-25 20:03:51.056993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.307 [2024-04-25 20:03:51.061551] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:53.307 [2024-04-25 20:03:51.069513] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:03.300 20:04:00 -- accel/accel.sh@18 -- # out=' 00:09:03.300 SPDK Configuration: 00:09:03.300 Core mask: 0x1 00:09:03.300 00:09:03.300 Accel Perf Configuration: 00:09:03.300 Workload Type: xor 00:09:03.300 Source buffers: 2 00:09:03.300 Transfer size: 4096 bytes 00:09:03.300 Vector count 1 00:09:03.300 Module: software 00:09:03.300 Queue depth: 32 00:09:03.300 Allocate depth: 32 00:09:03.300 # threads/core: 1 00:09:03.300 Run time: 1 seconds 00:09:03.300 Verify: Yes 00:09:03.300 00:09:03.300 Running for 1 seconds... 00:09:03.300 00:09:03.300 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:03.300 ------------------------------------------------------------------------------------ 00:09:03.300 0,0 443552/s 1732 MiB/s 0 0 00:09:03.300 ==================================================================================== 00:09:03.300 Total 443552/s 1732 MiB/s 0 0' 00:09:03.300 20:04:00 -- accel/accel.sh@20 -- # IFS=: 00:09:03.300 20:04:00 -- accel/accel.sh@20 -- # read -r var val 00:09:03.300 20:04:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:09:03.300 20:04:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:09:03.300 20:04:00 -- accel/accel.sh@12 -- # build_accel_config 00:09:03.300 20:04:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:03.300 20:04:00 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:03.300 20:04:00 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:03.300 20:04:00 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:03.300 20:04:00 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:03.300 20:04:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:03.300 20:04:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:03.300 20:04:00 -- accel/accel.sh@41 -- # local IFS=, 00:09:03.300 20:04:00 -- accel/accel.sh@42 -- # jq -r . 00:09:03.300 [2024-04-25 20:04:00.507248] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
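build_accel_config, traced before every accel_perf launch above, collects the '{"method": "dsa_scan_accel_module"}' and '{"method": "iaa_scan_accel_module"}' fragments, joins them with 'IFS=,' and hands the result to accel_perf on /dev/fd/62 through the -c flag. A minimal reconstruction of what the generated config plausibly looks like; the subsystems/accel wrapper keys are an assumption based on SPDK's JSON config layout, not copied from this log:

    build_accel_config_sketch() {
        local accel_json_cfg=('{"method": "dsa_scan_accel_module"}' '{"method": "iaa_scan_accel_module"}')
        local IFS=,
        echo "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}" | jq -r .
    }
    # consumed roughly as (process substitution is what shows up as /dev/fd/62 in the trace):
    #   ./build/examples/accel_perf -c <(build_accel_config_sketch) -t 1 -w xor -y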
00:09:03.300 [2024-04-25 20:04:00.507375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1370664 ] 00:09:03.300 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.300 [2024-04-25 20:04:00.621886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.300 [2024-04-25 20:04:00.716199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.300 [2024-04-25 20:04:00.720760] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:03.300 [2024-04-25 20:04:00.728722] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:09.876 20:04:07 -- accel/accel.sh@21 -- # val= 00:09:09.876 20:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # IFS=: 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # read -r var val 00:09:09.876 20:04:07 -- accel/accel.sh@21 -- # val= 00:09:09.876 20:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # IFS=: 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # read -r var val 00:09:09.876 20:04:07 -- accel/accel.sh@21 -- # val=0x1 00:09:09.876 20:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # IFS=: 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # read -r var val 00:09:09.876 20:04:07 -- accel/accel.sh@21 -- # val= 00:09:09.876 20:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # IFS=: 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # read -r var val 00:09:09.876 20:04:07 -- accel/accel.sh@21 -- # val= 00:09:09.876 20:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # IFS=: 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # read -r var val 00:09:09.876 20:04:07 -- accel/accel.sh@21 -- # val=xor 00:09:09.876 20:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.876 20:04:07 -- accel/accel.sh@24 -- # accel_opc=xor 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # IFS=: 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # read -r var val 00:09:09.876 20:04:07 -- accel/accel.sh@21 -- # val=2 00:09:09.876 20:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # IFS=: 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # read -r var val 00:09:09.876 20:04:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:09.876 20:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # IFS=: 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # read -r var val 00:09:09.876 20:04:07 -- accel/accel.sh@21 -- # val= 00:09:09.876 20:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # IFS=: 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # read -r var val 00:09:09.876 20:04:07 -- accel/accel.sh@21 -- # val=software 00:09:09.876 20:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.876 20:04:07 -- accel/accel.sh@23 -- # accel_module=software 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # IFS=: 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # read -r var val 00:09:09.876 20:04:07 -- accel/accel.sh@21 -- # val=32 00:09:09.876 20:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # IFS=: 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # read -r var val 00:09:09.876 20:04:07 -- 
accel/accel.sh@21 -- # val=32 00:09:09.876 20:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # IFS=: 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # read -r var val 00:09:09.876 20:04:07 -- accel/accel.sh@21 -- # val=1 00:09:09.876 20:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # IFS=: 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # read -r var val 00:09:09.876 20:04:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:09.876 20:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # IFS=: 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # read -r var val 00:09:09.876 20:04:07 -- accel/accel.sh@21 -- # val=Yes 00:09:09.876 20:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # IFS=: 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # read -r var val 00:09:09.876 20:04:07 -- accel/accel.sh@21 -- # val= 00:09:09.876 20:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # IFS=: 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # read -r var val 00:09:09.876 20:04:07 -- accel/accel.sh@21 -- # val= 00:09:09.876 20:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # IFS=: 00:09:09.876 20:04:07 -- accel/accel.sh@20 -- # read -r var val 00:09:12.489 20:04:10 -- accel/accel.sh@21 -- # val= 00:09:12.489 20:04:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.489 20:04:10 -- accel/accel.sh@20 -- # IFS=: 00:09:12.489 20:04:10 -- accel/accel.sh@20 -- # read -r var val 00:09:12.489 20:04:10 -- accel/accel.sh@21 -- # val= 00:09:12.489 20:04:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.489 20:04:10 -- accel/accel.sh@20 -- # IFS=: 00:09:12.489 20:04:10 -- accel/accel.sh@20 -- # read -r var val 00:09:12.489 20:04:10 -- accel/accel.sh@21 -- # val= 00:09:12.489 20:04:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.489 20:04:10 -- accel/accel.sh@20 -- # IFS=: 00:09:12.489 20:04:10 -- accel/accel.sh@20 -- # read -r var val 00:09:12.489 20:04:10 -- accel/accel.sh@21 -- # val= 00:09:12.489 20:04:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.489 20:04:10 -- accel/accel.sh@20 -- # IFS=: 00:09:12.489 20:04:10 -- accel/accel.sh@20 -- # read -r var val 00:09:12.489 20:04:10 -- accel/accel.sh@21 -- # val= 00:09:12.489 20:04:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.489 20:04:10 -- accel/accel.sh@20 -- # IFS=: 00:09:12.489 20:04:10 -- accel/accel.sh@20 -- # read -r var val 00:09:12.489 20:04:10 -- accel/accel.sh@21 -- # val= 00:09:12.489 20:04:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.489 20:04:10 -- accel/accel.sh@20 -- # IFS=: 00:09:12.489 20:04:10 -- accel/accel.sh@20 -- # read -r var val 00:09:12.489 20:04:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:12.489 20:04:10 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:09:12.489 20:04:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:12.489 00:09:12.489 real 0m19.357s 00:09:12.489 user 0m6.533s 00:09:12.489 sys 0m0.464s 00:09:12.489 20:04:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.489 20:04:10 -- common/autotest_common.sh@10 -- # set +x 00:09:12.489 ************************************ 00:09:12.489 END TEST accel_xor 00:09:12.489 ************************************ 00:09:12.489 20:04:10 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:09:12.489 20:04:10 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 
00:09:12.489 20:04:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:12.489 20:04:10 -- common/autotest_common.sh@10 -- # set +x 00:09:12.489 ************************************ 00:09:12.489 START TEST accel_xor 00:09:12.489 ************************************ 00:09:12.489 20:04:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:09:12.489 20:04:10 -- accel/accel.sh@16 -- # local accel_opc 00:09:12.489 20:04:10 -- accel/accel.sh@17 -- # local accel_module 00:09:12.489 20:04:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:09:12.489 20:04:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:12.489 20:04:10 -- accel/accel.sh@12 -- # build_accel_config 00:09:12.489 20:04:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:12.489 20:04:10 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:12.489 20:04:10 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:12.489 20:04:10 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:12.489 20:04:10 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:12.489 20:04:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:12.489 20:04:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:12.489 20:04:10 -- accel/accel.sh@41 -- # local IFS=, 00:09:12.489 20:04:10 -- accel/accel.sh@42 -- # jq -r . 00:09:12.489 [2024-04-25 20:04:10.239677] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:12.489 [2024-04-25 20:04:10.239804] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1372576 ] 00:09:12.489 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.489 [2024-04-25 20:04:10.356091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.748 [2024-04-25 20:04:10.451923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.748 [2024-04-25 20:04:10.456512] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:12.748 [2024-04-25 20:04:10.464466] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:22.737 20:04:19 -- accel/accel.sh@18 -- # out=' 00:09:22.737 SPDK Configuration: 00:09:22.737 Core mask: 0x1 00:09:22.737 00:09:22.737 Accel Perf Configuration: 00:09:22.737 Workload Type: xor 00:09:22.737 Source buffers: 3 00:09:22.737 Transfer size: 4096 bytes 00:09:22.737 Vector count 1 00:09:22.737 Module: software 00:09:22.737 Queue depth: 32 00:09:22.737 Allocate depth: 32 00:09:22.737 # threads/core: 1 00:09:22.737 Run time: 1 seconds 00:09:22.737 Verify: Yes 00:09:22.737 00:09:22.737 Running for 1 seconds... 
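Each of these blocks is wrapped by run_test from common/autotest_common.sh, which is what produces the START TEST / END TEST banners and the real/user/sys timing, and the same accel_xor test name is reused here for the 3-source-buffer variant (-x 3). A stripped-down sketch of that wrapper pattern; the real helper also manages xtrace and failure bookkeeping:

    run_test_sketch() {
        local name=$1
        shift
        echo "START TEST $name"
        time "$@"
        echo "END TEST $name"
    }
    # run_test_sketch accel_xor accel_test -t 1 -w xor -y -x 3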
00:09:22.737 00:09:22.737 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:22.737 ------------------------------------------------------------------------------------ 00:09:22.737 0,0 433664/s 1694 MiB/s 0 0 00:09:22.737 ==================================================================================== 00:09:22.737 Total 433664/s 1694 MiB/s 0 0' 00:09:22.737 20:04:19 -- accel/accel.sh@20 -- # IFS=: 00:09:22.737 20:04:19 -- accel/accel.sh@20 -- # read -r var val 00:09:22.737 20:04:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:09:22.737 20:04:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:22.737 20:04:19 -- accel/accel.sh@12 -- # build_accel_config 00:09:22.737 20:04:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:22.737 20:04:19 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:22.737 20:04:19 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:22.737 20:04:19 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:22.737 20:04:19 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:22.737 20:04:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:22.737 20:04:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:22.737 20:04:19 -- accel/accel.sh@41 -- # local IFS=, 00:09:22.737 20:04:19 -- accel/accel.sh@42 -- # jq -r . 00:09:22.737 [2024-04-25 20:04:19.916558] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:22.737 [2024-04-25 20:04:19.916686] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1374565 ] 00:09:22.737 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.737 [2024-04-25 20:04:20.035192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.737 [2024-04-25 20:04:20.129417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.737 [2024-04-25 20:04:20.133934] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:22.737 [2024-04-25 20:04:20.141903] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:29.323 20:04:26 -- accel/accel.sh@21 -- # val= 00:09:29.323 20:04:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # IFS=: 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # read -r var val 00:09:29.323 20:04:26 -- accel/accel.sh@21 -- # val= 00:09:29.323 20:04:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # IFS=: 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # read -r var val 00:09:29.323 20:04:26 -- accel/accel.sh@21 -- # val=0x1 00:09:29.323 20:04:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # IFS=: 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # read -r var val 00:09:29.323 20:04:26 -- accel/accel.sh@21 -- # val= 00:09:29.323 20:04:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # IFS=: 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # read -r var val 00:09:29.323 20:04:26 -- accel/accel.sh@21 -- # val= 00:09:29.323 20:04:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # IFS=: 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # read -r var val 00:09:29.323 20:04:26 -- accel/accel.sh@21 -- # val=xor 
00:09:29.323 20:04:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.323 20:04:26 -- accel/accel.sh@24 -- # accel_opc=xor 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # IFS=: 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # read -r var val 00:09:29.323 20:04:26 -- accel/accel.sh@21 -- # val=3 00:09:29.323 20:04:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # IFS=: 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # read -r var val 00:09:29.323 20:04:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:29.323 20:04:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # IFS=: 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # read -r var val 00:09:29.323 20:04:26 -- accel/accel.sh@21 -- # val= 00:09:29.323 20:04:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # IFS=: 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # read -r var val 00:09:29.323 20:04:26 -- accel/accel.sh@21 -- # val=software 00:09:29.323 20:04:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.323 20:04:26 -- accel/accel.sh@23 -- # accel_module=software 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # IFS=: 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # read -r var val 00:09:29.323 20:04:26 -- accel/accel.sh@21 -- # val=32 00:09:29.323 20:04:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # IFS=: 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # read -r var val 00:09:29.323 20:04:26 -- accel/accel.sh@21 -- # val=32 00:09:29.323 20:04:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # IFS=: 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # read -r var val 00:09:29.323 20:04:26 -- accel/accel.sh@21 -- # val=1 00:09:29.323 20:04:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # IFS=: 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # read -r var val 00:09:29.323 20:04:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:29.323 20:04:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # IFS=: 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # read -r var val 00:09:29.323 20:04:26 -- accel/accel.sh@21 -- # val=Yes 00:09:29.323 20:04:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # IFS=: 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # read -r var val 00:09:29.323 20:04:26 -- accel/accel.sh@21 -- # val= 00:09:29.323 20:04:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # IFS=: 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # read -r var val 00:09:29.323 20:04:26 -- accel/accel.sh@21 -- # val= 00:09:29.323 20:04:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # IFS=: 00:09:29.323 20:04:26 -- accel/accel.sh@20 -- # read -r var val 00:09:31.864 20:04:29 -- accel/accel.sh@21 -- # val= 00:09:31.864 20:04:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:31.864 20:04:29 -- accel/accel.sh@20 -- # IFS=: 00:09:31.864 20:04:29 -- accel/accel.sh@20 -- # read -r var val 00:09:31.864 20:04:29 -- accel/accel.sh@21 -- # val= 00:09:31.864 20:04:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:31.864 20:04:29 -- accel/accel.sh@20 -- # IFS=: 00:09:31.864 20:04:29 -- accel/accel.sh@20 -- # read -r var val 00:09:31.864 20:04:29 -- accel/accel.sh@21 -- # val= 00:09:31.864 20:04:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:31.864 20:04:29 -- accel/accel.sh@20 
-- # IFS=: 00:09:31.864 20:04:29 -- accel/accel.sh@20 -- # read -r var val 00:09:31.864 20:04:29 -- accel/accel.sh@21 -- # val= 00:09:31.864 20:04:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:31.864 20:04:29 -- accel/accel.sh@20 -- # IFS=: 00:09:31.864 20:04:29 -- accel/accel.sh@20 -- # read -r var val 00:09:31.864 20:04:29 -- accel/accel.sh@21 -- # val= 00:09:31.864 20:04:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:31.864 20:04:29 -- accel/accel.sh@20 -- # IFS=: 00:09:31.864 20:04:29 -- accel/accel.sh@20 -- # read -r var val 00:09:31.864 20:04:29 -- accel/accel.sh@21 -- # val= 00:09:31.864 20:04:29 -- accel/accel.sh@22 -- # case "$var" in 00:09:31.864 20:04:29 -- accel/accel.sh@20 -- # IFS=: 00:09:31.864 20:04:29 -- accel/accel.sh@20 -- # read -r var val 00:09:31.864 20:04:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:31.864 20:04:29 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:09:31.864 20:04:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:31.864 00:09:31.864 real 0m19.353s 00:09:31.864 user 0m6.555s 00:09:31.864 sys 0m0.453s 00:09:31.864 20:04:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.864 20:04:29 -- common/autotest_common.sh@10 -- # set +x 00:09:31.864 ************************************ 00:09:31.864 END TEST accel_xor 00:09:31.864 ************************************ 00:09:31.864 20:04:29 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:09:31.865 20:04:29 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:31.865 20:04:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:31.865 20:04:29 -- common/autotest_common.sh@10 -- # set +x 00:09:31.865 ************************************ 00:09:31.865 START TEST accel_dif_verify 00:09:31.865 ************************************ 00:09:31.865 20:04:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:09:31.865 20:04:29 -- accel/accel.sh@16 -- # local accel_opc 00:09:31.865 20:04:29 -- accel/accel.sh@17 -- # local accel_module 00:09:31.865 20:04:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:09:31.865 20:04:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:31.865 20:04:29 -- accel/accel.sh@12 -- # build_accel_config 00:09:31.865 20:04:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:31.865 20:04:29 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:31.865 20:04:29 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:31.865 20:04:29 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:31.865 20:04:29 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:31.865 20:04:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:31.865 20:04:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:31.865 20:04:29 -- accel/accel.sh@41 -- # local IFS=, 00:09:31.865 20:04:29 -- accel/accel.sh@42 -- # jq -r . 00:09:31.865 [2024-04-25 20:04:29.622272] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
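The 'Enabled DSA user-mode' / 'Enabled IAA user-mode' notices printed at each start-up come from the rpc_dsa_scan_accel_module and rpc_iaa_scan_accel_module handlers, i.e. the same methods named in the JSON fragments above. Against an already-running SPDK application the equivalent could be issued over RPC; this is not something the test does, just the interactive counterpart, assuming the stock scripts/rpc.py from the checkout:

    cd /var/jenkins/workspace/dsa-phy-autotest/spdk
    ./scripts/rpc.py dsa_scan_accel_module
    ./scripts/rpc.py iaa_scan_accel_module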
00:09:31.865 [2024-04-25 20:04:29.622392] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1376398 ] 00:09:31.865 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.865 [2024-04-25 20:04:29.734938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.125 [2024-04-25 20:04:29.835182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.125 [2024-04-25 20:04:29.839906] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:32.125 [2024-04-25 20:04:29.847876] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:42.115 20:04:39 -- accel/accel.sh@18 -- # out=' 00:09:42.115 SPDK Configuration: 00:09:42.115 Core mask: 0x1 00:09:42.115 00:09:42.115 Accel Perf Configuration: 00:09:42.115 Workload Type: dif_verify 00:09:42.115 Vector size: 4096 bytes 00:09:42.115 Transfer size: 4096 bytes 00:09:42.115 Block size: 512 bytes 00:09:42.115 Metadata size: 8 bytes 00:09:42.115 Vector count 1 00:09:42.115 Module: dsa 00:09:42.115 Queue depth: 32 00:09:42.115 Allocate depth: 32 00:09:42.115 # threads/core: 1 00:09:42.115 Run time: 1 seconds 00:09:42.115 Verify: No 00:09:42.115 00:09:42.115 Running for 1 seconds... 00:09:42.115 00:09:42.115 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:42.115 ------------------------------------------------------------------------------------ 00:09:42.115 0,0 362528/s 1438 MiB/s 0 0 00:09:42.115 ==================================================================================== 00:09:42.115 Total 362528/s 1416 MiB/s 0 0' 00:09:42.115 20:04:39 -- accel/accel.sh@20 -- # IFS=: 00:09:42.115 20:04:39 -- accel/accel.sh@20 -- # read -r var val 00:09:42.115 20:04:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:09:42.115 20:04:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:42.115 20:04:39 -- accel/accel.sh@12 -- # build_accel_config 00:09:42.115 20:04:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:42.115 20:04:39 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:42.115 20:04:39 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:42.115 20:04:39 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:42.115 20:04:39 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:42.115 20:04:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:42.115 20:04:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:42.115 20:04:39 -- accel/accel.sh@41 -- # local IFS=, 00:09:42.115 20:04:39 -- accel/accel.sh@42 -- # jq -r . 00:09:42.115 [2024-04-25 20:04:39.306618] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
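The dif_verify report above spells out the defaults this case exercises on the dsa module: 4096-byte transfers carrying DIF with a 512-byte block size and 8 bytes of metadata, and Verify: No since the -y flag is not part of the dif command lines traced here. A hand-run approximation of the same case, reusing the config sketch from earlier, would be roughly:

    cd /var/jenkins/workspace/dsa-phy-autotest/spdk
    ./build/examples/accel_perf -c <(build_accel_config_sketch) -t 1 -w dif_verify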
00:09:42.115 [2024-04-25 20:04:39.306747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1378458 ] 00:09:42.115 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.115 [2024-04-25 20:04:39.423412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.115 [2024-04-25 20:04:39.519829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.115 [2024-04-25 20:04:39.524397] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:42.115 [2024-04-25 20:04:39.532373] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:48.698 20:04:45 -- accel/accel.sh@21 -- # val= 00:09:48.698 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:48.698 20:04:45 -- accel/accel.sh@21 -- # val= 00:09:48.698 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:48.698 20:04:45 -- accel/accel.sh@21 -- # val=0x1 00:09:48.698 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:48.698 20:04:45 -- accel/accel.sh@21 -- # val= 00:09:48.698 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:48.698 20:04:45 -- accel/accel.sh@21 -- # val= 00:09:48.698 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:48.698 20:04:45 -- accel/accel.sh@21 -- # val=dif_verify 00:09:48.698 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.698 20:04:45 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:48.698 20:04:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:48.698 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:48.698 20:04:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:48.698 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:48.698 20:04:45 -- accel/accel.sh@21 -- # val='512 bytes' 00:09:48.698 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:48.698 20:04:45 -- accel/accel.sh@21 -- # val='8 bytes' 00:09:48.698 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:48.698 20:04:45 -- accel/accel.sh@21 -- # val= 00:09:48.698 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:48.698 20:04:45 -- accel/accel.sh@21 -- # val=dsa 
00:09:48.698 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.698 20:04:45 -- accel/accel.sh@23 -- # accel_module=dsa 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:48.698 20:04:45 -- accel/accel.sh@21 -- # val=32 00:09:48.698 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.698 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:48.699 20:04:45 -- accel/accel.sh@21 -- # val=32 00:09:48.699 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.699 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.699 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:48.699 20:04:45 -- accel/accel.sh@21 -- # val=1 00:09:48.699 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.699 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.699 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:48.699 20:04:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:48.699 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.699 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.699 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:48.699 20:04:45 -- accel/accel.sh@21 -- # val=No 00:09:48.699 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.699 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.699 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:48.699 20:04:45 -- accel/accel.sh@21 -- # val= 00:09:48.699 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.699 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.699 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:48.699 20:04:45 -- accel/accel.sh@21 -- # val= 00:09:48.699 20:04:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.699 20:04:45 -- accel/accel.sh@20 -- # IFS=: 00:09:48.699 20:04:45 -- accel/accel.sh@20 -- # read -r var val 00:09:51.238 20:04:48 -- accel/accel.sh@21 -- # val= 00:09:51.238 20:04:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.238 20:04:48 -- accel/accel.sh@20 -- # IFS=: 00:09:51.238 20:04:48 -- accel/accel.sh@20 -- # read -r var val 00:09:51.238 20:04:48 -- accel/accel.sh@21 -- # val= 00:09:51.238 20:04:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.238 20:04:48 -- accel/accel.sh@20 -- # IFS=: 00:09:51.238 20:04:48 -- accel/accel.sh@20 -- # read -r var val 00:09:51.238 20:04:48 -- accel/accel.sh@21 -- # val= 00:09:51.238 20:04:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.238 20:04:48 -- accel/accel.sh@20 -- # IFS=: 00:09:51.238 20:04:48 -- accel/accel.sh@20 -- # read -r var val 00:09:51.238 20:04:48 -- accel/accel.sh@21 -- # val= 00:09:51.238 20:04:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.238 20:04:48 -- accel/accel.sh@20 -- # IFS=: 00:09:51.238 20:04:48 -- accel/accel.sh@20 -- # read -r var val 00:09:51.238 20:04:48 -- accel/accel.sh@21 -- # val= 00:09:51.238 20:04:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.238 20:04:48 -- accel/accel.sh@20 -- # IFS=: 00:09:51.238 20:04:48 -- accel/accel.sh@20 -- # read -r var val 00:09:51.238 20:04:48 -- accel/accel.sh@21 -- # val= 00:09:51.238 20:04:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.238 20:04:48 -- accel/accel.sh@20 -- # IFS=: 00:09:51.238 20:04:48 -- accel/accel.sh@20 -- # read -r var val 00:09:51.238 20:04:48 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:09:51.238 20:04:48 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:09:51.238 20:04:48 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:09:51.238 00:09:51.238 real 0m19.355s 
00:09:51.238 user 0m6.534s 00:09:51.238 sys 0m0.468s 00:09:51.238 20:04:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.238 20:04:48 -- common/autotest_common.sh@10 -- # set +x 00:09:51.238 ************************************ 00:09:51.238 END TEST accel_dif_verify 00:09:51.238 ************************************ 00:09:51.238 20:04:48 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:09:51.238 20:04:48 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:51.238 20:04:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:51.238 20:04:48 -- common/autotest_common.sh@10 -- # set +x 00:09:51.238 ************************************ 00:09:51.238 START TEST accel_dif_generate 00:09:51.238 ************************************ 00:09:51.238 20:04:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:09:51.238 20:04:48 -- accel/accel.sh@16 -- # local accel_opc 00:09:51.238 20:04:48 -- accel/accel.sh@17 -- # local accel_module 00:09:51.238 20:04:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:09:51.238 20:04:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:51.238 20:04:48 -- accel/accel.sh@12 -- # build_accel_config 00:09:51.238 20:04:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:51.238 20:04:48 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:51.238 20:04:48 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:51.238 20:04:48 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:51.238 20:04:48 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:51.238 20:04:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:51.238 20:04:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:51.238 20:04:48 -- accel/accel.sh@41 -- # local IFS=, 00:09:51.238 20:04:48 -- accel/accel.sh@42 -- # jq -r . 00:09:51.238 [2024-04-25 20:04:49.009438] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:51.238 [2024-04-25 20:04:49.009561] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1380285 ] 00:09:51.238 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.238 [2024-04-25 20:04:49.120943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.498 [2024-04-25 20:04:49.214822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.498 [2024-04-25 20:04:49.219343] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:51.498 [2024-04-25 20:04:49.227313] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:01.534 20:04:58 -- accel/accel.sh@18 -- # out=' 00:10:01.534 SPDK Configuration: 00:10:01.534 Core mask: 0x1 00:10:01.534 00:10:01.534 Accel Perf Configuration: 00:10:01.534 Workload Type: dif_generate 00:10:01.534 Vector size: 4096 bytes 00:10:01.534 Transfer size: 4096 bytes 00:10:01.534 Block size: 512 bytes 00:10:01.534 Metadata size: 8 bytes 00:10:01.534 Vector count 1 00:10:01.534 Module: software 00:10:01.534 Queue depth: 32 00:10:01.534 Allocate depth: 32 00:10:01.534 # threads/core: 1 00:10:01.534 Run time: 1 seconds 00:10:01.534 Verify: No 00:10:01.534 00:10:01.534 Running for 1 seconds... 
00:10:01.534 00:10:01.534 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:01.534 ------------------------------------------------------------------------------------ 00:10:01.534 0,0 155584/s 617 MiB/s 0 0 00:10:01.534 ==================================================================================== 00:10:01.534 Total 155584/s 607 MiB/s 0 0' 00:10:01.534 20:04:58 -- accel/accel.sh@20 -- # IFS=: 00:10:01.534 20:04:58 -- accel/accel.sh@20 -- # read -r var val 00:10:01.534 20:04:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:01.534 20:04:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:01.534 20:04:58 -- accel/accel.sh@12 -- # build_accel_config 00:10:01.534 20:04:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:01.534 20:04:58 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:01.534 20:04:58 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:01.534 20:04:58 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:01.534 20:04:58 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:01.534 20:04:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:01.534 20:04:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:01.534 20:04:58 -- accel/accel.sh@41 -- # local IFS=, 00:10:01.534 20:04:58 -- accel/accel.sh@42 -- # jq -r . 00:10:01.534 [2024-04-25 20:04:58.702969] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:01.534 [2024-04-25 20:04:58.703087] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382214 ] 00:10:01.534 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.534 [2024-04-25 20:04:58.813438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.534 [2024-04-25 20:04:58.908042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.534 [2024-04-25 20:04:58.912601] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:01.534 [2024-04-25 20:04:58.920567] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # val= 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # val= 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # val=0x1 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # val= 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # val= 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # 
val=dif_generate 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # val= 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # val=software 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@23 -- # accel_module=software 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # val=32 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # val=32 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # val=1 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # val=No 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # val= 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:08.117 20:05:05 -- accel/accel.sh@21 -- # val= 00:10:08.117 20:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # IFS=: 00:10:08.117 20:05:05 -- accel/accel.sh@20 -- # read -r var val 00:10:10.663 20:05:08 -- accel/accel.sh@21 -- # val= 00:10:10.663 20:05:08 -- accel/accel.sh@22 -- # 
case "$var" in 00:10:10.663 20:05:08 -- accel/accel.sh@20 -- # IFS=: 00:10:10.663 20:05:08 -- accel/accel.sh@20 -- # read -r var val 00:10:10.663 20:05:08 -- accel/accel.sh@21 -- # val= 00:10:10.663 20:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.663 20:05:08 -- accel/accel.sh@20 -- # IFS=: 00:10:10.663 20:05:08 -- accel/accel.sh@20 -- # read -r var val 00:10:10.663 20:05:08 -- accel/accel.sh@21 -- # val= 00:10:10.663 20:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.663 20:05:08 -- accel/accel.sh@20 -- # IFS=: 00:10:10.663 20:05:08 -- accel/accel.sh@20 -- # read -r var val 00:10:10.663 20:05:08 -- accel/accel.sh@21 -- # val= 00:10:10.663 20:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.663 20:05:08 -- accel/accel.sh@20 -- # IFS=: 00:10:10.663 20:05:08 -- accel/accel.sh@20 -- # read -r var val 00:10:10.663 20:05:08 -- accel/accel.sh@21 -- # val= 00:10:10.663 20:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.663 20:05:08 -- accel/accel.sh@20 -- # IFS=: 00:10:10.663 20:05:08 -- accel/accel.sh@20 -- # read -r var val 00:10:10.663 20:05:08 -- accel/accel.sh@21 -- # val= 00:10:10.663 20:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.663 20:05:08 -- accel/accel.sh@20 -- # IFS=: 00:10:10.663 20:05:08 -- accel/accel.sh@20 -- # read -r var val 00:10:10.663 20:05:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:10.663 20:05:08 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:10:10.663 20:05:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:10.663 00:10:10.663 real 0m19.345s 00:10:10.663 user 0m6.532s 00:10:10.663 sys 0m0.463s 00:10:10.663 20:05:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.663 20:05:08 -- common/autotest_common.sh@10 -- # set +x 00:10:10.663 ************************************ 00:10:10.663 END TEST accel_dif_generate 00:10:10.663 ************************************ 00:10:10.663 20:05:08 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:10.663 20:05:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:10.663 20:05:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:10.663 20:05:08 -- common/autotest_common.sh@10 -- # set +x 00:10:10.663 ************************************ 00:10:10.663 START TEST accel_dif_generate_copy 00:10:10.663 ************************************ 00:10:10.663 20:05:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:10:10.663 20:05:08 -- accel/accel.sh@16 -- # local accel_opc 00:10:10.663 20:05:08 -- accel/accel.sh@17 -- # local accel_module 00:10:10.663 20:05:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:10:10.663 20:05:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:10.663 20:05:08 -- accel/accel.sh@12 -- # build_accel_config 00:10:10.663 20:05:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:10.663 20:05:08 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:10.663 20:05:08 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:10.663 20:05:08 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:10.663 20:05:08 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:10.663 20:05:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:10.663 20:05:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:10.663 20:05:08 -- accel/accel.sh@41 -- # local IFS=, 00:10:10.663 20:05:08 -- accel/accel.sh@42 -- # 
jq -r . 00:10:10.663 [2024-04-25 20:05:08.378279] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:10.663 [2024-04-25 20:05:08.378370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1384182 ] 00:10:10.663 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.663 [2024-04-25 20:05:08.466140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.663 [2024-04-25 20:05:08.561656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.663 [2024-04-25 20:05:08.566214] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:10.663 [2024-04-25 20:05:08.574178] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:20.678 20:05:18 -- accel/accel.sh@18 -- # out=' 00:10:20.678 SPDK Configuration: 00:10:20.678 Core mask: 0x1 00:10:20.678 00:10:20.678 Accel Perf Configuration: 00:10:20.678 Workload Type: dif_generate_copy 00:10:20.678 Vector size: 4096 bytes 00:10:20.678 Transfer size: 4096 bytes 00:10:20.678 Vector count 1 00:10:20.678 Module: dsa 00:10:20.678 Queue depth: 32 00:10:20.678 Allocate depth: 32 00:10:20.678 # threads/core: 1 00:10:20.678 Run time: 1 seconds 00:10:20.678 Verify: No 00:10:20.678 00:10:20.678 Running for 1 seconds... 00:10:20.678 00:10:20.678 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:20.678 ------------------------------------------------------------------------------------ 00:10:20.678 0,0 339168/s 1345 MiB/s 0 0 00:10:20.678 ==================================================================================== 00:10:20.678 Total 339168/s 1324 MiB/s 0 0' 00:10:20.678 20:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:20.678 20:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:20.678 20:05:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:20.678 20:05:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:20.678 20:05:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:20.678 20:05:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:20.678 20:05:18 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:20.678 20:05:18 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:20.678 20:05:18 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:20.678 20:05:18 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:20.678 20:05:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:20.678 20:05:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:20.678 20:05:18 -- accel/accel.sh@41 -- # local IFS=, 00:10:20.678 20:05:18 -- accel/accel.sh@42 -- # jq -r . 00:10:20.678 [2024-04-25 20:05:18.042273] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
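Across these reports dif_verify and dif_generate_copy land on the dsa module while dif_generate stays on the software module, so comparing the two generate flavours side by side is effectively a software-versus-DSA comparison. A hypothetical way to reproduce that comparison outside the harness, again reusing the config sketch from earlier:

    cd /var/jenkins/workspace/dsa-phy-autotest/spdk
    for w in dif_generate dif_generate_copy; do
        ./build/examples/accel_perf -c <(build_accel_config_sketch) -t 1 -w "$w" | grep -E 'Module|Total'
    done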
00:10:20.678 [2024-04-25 20:05:18.042400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385989 ] 00:10:20.678 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.678 [2024-04-25 20:05:18.157446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.678 [2024-04-25 20:05:18.253092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.678 [2024-04-25 20:05:18.257686] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:20.678 [2024-04-25 20:05:18.265652] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:27.264 20:05:24 -- accel/accel.sh@21 -- # val= 00:10:27.264 20:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.264 20:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:27.264 20:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:27.264 20:05:24 -- accel/accel.sh@21 -- # val= 00:10:27.264 20:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.264 20:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:27.264 20:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:27.264 20:05:24 -- accel/accel.sh@21 -- # val=0x1 00:10:27.264 20:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.264 20:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:27.264 20:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:27.264 20:05:24 -- accel/accel.sh@21 -- # val= 00:10:27.264 20:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.264 20:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:27.264 20:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:27.264 20:05:24 -- accel/accel.sh@21 -- # val= 00:10:27.264 20:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.264 20:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:27.264 20:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:27.264 20:05:24 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:10:27.264 20:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.264 20:05:24 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:10:27.264 20:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:27.264 20:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:27.264 20:05:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:27.264 20:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.264 20:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:27.264 20:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:27.264 20:05:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:27.264 20:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:27.265 20:05:24 -- accel/accel.sh@21 -- # val= 00:10:27.265 20:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:27.265 20:05:24 -- accel/accel.sh@21 -- # val=dsa 00:10:27.265 20:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.265 20:05:24 -- accel/accel.sh@23 -- # accel_module=dsa 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:27.265 20:05:24 -- accel/accel.sh@21 -- # val=32 00:10:27.265 20:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # read -r var 
val 00:10:27.265 20:05:24 -- accel/accel.sh@21 -- # val=32 00:10:27.265 20:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:27.265 20:05:24 -- accel/accel.sh@21 -- # val=1 00:10:27.265 20:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:27.265 20:05:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:27.265 20:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:27.265 20:05:24 -- accel/accel.sh@21 -- # val=No 00:10:27.265 20:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:27.265 20:05:24 -- accel/accel.sh@21 -- # val= 00:10:27.265 20:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:27.265 20:05:24 -- accel/accel.sh@21 -- # val= 00:10:27.265 20:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:27.265 20:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:29.809 20:05:27 -- accel/accel.sh@21 -- # val= 00:10:29.809 20:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.809 20:05:27 -- accel/accel.sh@20 -- # IFS=: 00:10:29.809 20:05:27 -- accel/accel.sh@20 -- # read -r var val 00:10:29.809 20:05:27 -- accel/accel.sh@21 -- # val= 00:10:29.809 20:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.809 20:05:27 -- accel/accel.sh@20 -- # IFS=: 00:10:29.809 20:05:27 -- accel/accel.sh@20 -- # read -r var val 00:10:29.809 20:05:27 -- accel/accel.sh@21 -- # val= 00:10:29.809 20:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.809 20:05:27 -- accel/accel.sh@20 -- # IFS=: 00:10:29.809 20:05:27 -- accel/accel.sh@20 -- # read -r var val 00:10:29.809 20:05:27 -- accel/accel.sh@21 -- # val= 00:10:29.809 20:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.809 20:05:27 -- accel/accel.sh@20 -- # IFS=: 00:10:29.809 20:05:27 -- accel/accel.sh@20 -- # read -r var val 00:10:29.809 20:05:27 -- accel/accel.sh@21 -- # val= 00:10:29.809 20:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.809 20:05:27 -- accel/accel.sh@20 -- # IFS=: 00:10:29.809 20:05:27 -- accel/accel.sh@20 -- # read -r var val 00:10:29.809 20:05:27 -- accel/accel.sh@21 -- # val= 00:10:29.809 20:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.809 20:05:27 -- accel/accel.sh@20 -- # IFS=: 00:10:29.809 20:05:27 -- accel/accel.sh@20 -- # read -r var val 00:10:29.809 20:05:27 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:10:29.809 20:05:27 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:10:29.809 20:05:27 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:10:29.809 00:10:29.809 real 0m19.358s 00:10:29.809 user 0m6.533s 00:10:29.809 sys 0m0.454s 00:10:29.809 20:05:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.809 20:05:27 -- common/autotest_common.sh@10 -- # set +x 00:10:29.809 ************************************ 00:10:29.809 END TEST accel_dif_generate_copy 00:10:29.809 ************************************ 00:10:29.809 20:05:27 -- accel/accel.sh@107 -- # [[ y == y ]] 00:10:29.809 20:05:27 -- accel/accel.sh@108 -- # run_test accel_comp 
accel_test -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:29.809 20:05:27 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:29.809 20:05:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:29.809 20:05:27 -- common/autotest_common.sh@10 -- # set +x 00:10:30.069 ************************************ 00:10:30.069 START TEST accel_comp 00:10:30.069 ************************************ 00:10:30.069 20:05:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:30.069 20:05:27 -- accel/accel.sh@16 -- # local accel_opc 00:10:30.069 20:05:27 -- accel/accel.sh@17 -- # local accel_module 00:10:30.069 20:05:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:30.069 20:05:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:30.069 20:05:27 -- accel/accel.sh@12 -- # build_accel_config 00:10:30.069 20:05:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:30.069 20:05:27 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:30.069 20:05:27 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:30.069 20:05:27 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:30.069 20:05:27 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:30.069 20:05:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:30.069 20:05:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:30.069 20:05:27 -- accel/accel.sh@41 -- # local IFS=, 00:10:30.069 20:05:27 -- accel/accel.sh@42 -- # jq -r . 00:10:30.069 [2024-04-25 20:05:27.769797] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:30.069 [2024-04-25 20:05:27.769888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1387902 ] 00:10:30.069 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.069 [2024-04-25 20:05:27.859380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.069 [2024-04-25 20:05:27.956448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.069 [2024-04-25 20:05:27.961021] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:30.069 [2024-04-25 20:05:27.968983] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:40.079 20:05:37 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:40.079 00:10:40.079 SPDK Configuration: 00:10:40.079 Core mask: 0x1 00:10:40.079 00:10:40.079 Accel Perf Configuration: 00:10:40.079 Workload Type: compress 00:10:40.079 Transfer size: 4096 bytes 00:10:40.079 Vector count 1 00:10:40.079 Module: iaa 00:10:40.079 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:40.079 Queue depth: 32 00:10:40.079 Allocate depth: 32 00:10:40.079 # threads/core: 1 00:10:40.079 Run time: 1 seconds 00:10:40.079 Verify: No 00:10:40.079 00:10:40.079 Running for 1 seconds... 
00:10:40.079 00:10:40.079 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:40.079 ------------------------------------------------------------------------------------ 00:10:40.079 0,0 268176/s 1117 MiB/s 0 0 00:10:40.079 ==================================================================================== 00:10:40.079 Total 268176/s 1047 MiB/s 0 0' 00:10:40.079 20:05:37 -- accel/accel.sh@20 -- # IFS=: 00:10:40.079 20:05:37 -- accel/accel.sh@20 -- # read -r var val 00:10:40.079 20:05:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:40.079 20:05:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:40.079 20:05:37 -- accel/accel.sh@12 -- # build_accel_config 00:10:40.079 20:05:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:40.079 20:05:37 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:40.079 20:05:37 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:40.079 20:05:37 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:40.079 20:05:37 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:40.079 20:05:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:40.079 20:05:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:40.079 20:05:37 -- accel/accel.sh@41 -- # local IFS=, 00:10:40.079 20:05:37 -- accel/accel.sh@42 -- # jq -r . 00:10:40.079 [2024-04-25 20:05:37.434075] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:40.079 [2024-04-25 20:05:37.434209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389878 ] 00:10:40.079 EAL: No free 2048 kB hugepages reported on node 1 00:10:40.079 [2024-04-25 20:05:37.549975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.079 [2024-04-25 20:05:37.644405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.079 [2024-04-25 20:05:37.648958] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:40.079 [2024-04-25 20:05:37.656921] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:46.714 20:05:44 -- accel/accel.sh@21 -- # val= 00:10:46.714 20:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:46.714 20:05:44 -- accel/accel.sh@21 -- # val= 00:10:46.714 20:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:46.714 20:05:44 -- accel/accel.sh@21 -- # val= 00:10:46.714 20:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:46.714 20:05:44 -- accel/accel.sh@21 -- # val=0x1 00:10:46.714 20:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:46.714 20:05:44 -- accel/accel.sh@21 -- # val= 00:10:46.714 20:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # IFS=: 
00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:46.714 20:05:44 -- accel/accel.sh@21 -- # val= 00:10:46.714 20:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:46.714 20:05:44 -- accel/accel.sh@21 -- # val=compress 00:10:46.714 20:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.714 20:05:44 -- accel/accel.sh@24 -- # accel_opc=compress 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:46.714 20:05:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:46.714 20:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:46.714 20:05:44 -- accel/accel.sh@21 -- # val= 00:10:46.714 20:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:46.714 20:05:44 -- accel/accel.sh@21 -- # val=iaa 00:10:46.714 20:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.714 20:05:44 -- accel/accel.sh@23 -- # accel_module=iaa 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:46.714 20:05:44 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:46.714 20:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:46.714 20:05:44 -- accel/accel.sh@21 -- # val=32 00:10:46.714 20:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:46.714 20:05:44 -- accel/accel.sh@21 -- # val=32 00:10:46.714 20:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:46.714 20:05:44 -- accel/accel.sh@21 -- # val=1 00:10:46.714 20:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:46.714 20:05:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:46.714 20:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:46.714 20:05:44 -- accel/accel.sh@21 -- # val=No 00:10:46.714 20:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:46.714 20:05:44 -- accel/accel.sh@21 -- # val= 00:10:46.714 20:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:46.714 20:05:44 -- accel/accel.sh@21 -- # val= 00:10:46.714 20:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # IFS=: 00:10:46.714 20:05:44 -- accel/accel.sh@20 -- # read -r var val 00:10:49.280 20:05:47 -- accel/accel.sh@21 -- # val= 00:10:49.280 20:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.280 20:05:47 -- accel/accel.sh@20 -- # IFS=: 00:10:49.280 20:05:47 -- accel/accel.sh@20 -- 
# read -r var val 00:10:49.280 20:05:47 -- accel/accel.sh@21 -- # val= 00:10:49.280 20:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.280 20:05:47 -- accel/accel.sh@20 -- # IFS=: 00:10:49.280 20:05:47 -- accel/accel.sh@20 -- # read -r var val 00:10:49.280 20:05:47 -- accel/accel.sh@21 -- # val= 00:10:49.280 20:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.280 20:05:47 -- accel/accel.sh@20 -- # IFS=: 00:10:49.280 20:05:47 -- accel/accel.sh@20 -- # read -r var val 00:10:49.280 20:05:47 -- accel/accel.sh@21 -- # val= 00:10:49.280 20:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.280 20:05:47 -- accel/accel.sh@20 -- # IFS=: 00:10:49.280 20:05:47 -- accel/accel.sh@20 -- # read -r var val 00:10:49.280 20:05:47 -- accel/accel.sh@21 -- # val= 00:10:49.280 20:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.280 20:05:47 -- accel/accel.sh@20 -- # IFS=: 00:10:49.280 20:05:47 -- accel/accel.sh@20 -- # read -r var val 00:10:49.280 20:05:47 -- accel/accel.sh@21 -- # val= 00:10:49.280 20:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.280 20:05:47 -- accel/accel.sh@20 -- # IFS=: 00:10:49.280 20:05:47 -- accel/accel.sh@20 -- # read -r var val 00:10:49.280 20:05:47 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:10:49.280 20:05:47 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:10:49.280 20:05:47 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:10:49.280 00:10:49.280 real 0m19.340s 00:10:49.280 user 0m6.534s 00:10:49.280 sys 0m0.446s 00:10:49.280 20:05:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.280 20:05:47 -- common/autotest_common.sh@10 -- # set +x 00:10:49.280 ************************************ 00:10:49.280 END TEST accel_comp 00:10:49.280 ************************************ 00:10:49.280 20:05:47 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:49.280 20:05:47 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:49.280 20:05:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:49.280 20:05:47 -- common/autotest_common.sh@10 -- # set +x 00:10:49.280 ************************************ 00:10:49.280 START TEST accel_decomp 00:10:49.280 ************************************ 00:10:49.280 20:05:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:49.280 20:05:47 -- accel/accel.sh@16 -- # local accel_opc 00:10:49.280 20:05:47 -- accel/accel.sh@17 -- # local accel_module 00:10:49.280 20:05:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:49.280 20:05:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:49.280 20:05:47 -- accel/accel.sh@12 -- # build_accel_config 00:10:49.280 20:05:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:49.280 20:05:47 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:49.280 20:05:47 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:49.280 20:05:47 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:49.280 20:05:47 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:49.280 20:05:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:49.280 20:05:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:49.280 20:05:47 -- accel/accel.sh@41 
-- # local IFS=, 00:10:49.280 20:05:47 -- accel/accel.sh@42 -- # jq -r . 00:10:49.280 [2024-04-25 20:05:47.155748] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:49.280 [2024-04-25 20:05:47.155879] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391757 ] 00:10:49.541 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.541 [2024-04-25 20:05:47.274567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.541 [2024-04-25 20:05:47.370944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.541 [2024-04-25 20:05:47.375515] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:49.541 [2024-04-25 20:05:47.383467] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:59.531 20:05:56 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:59.531 00:10:59.531 SPDK Configuration: 00:10:59.531 Core mask: 0x1 00:10:59.531 00:10:59.531 Accel Perf Configuration: 00:10:59.531 Workload Type: decompress 00:10:59.531 Transfer size: 4096 bytes 00:10:59.531 Vector count 1 00:10:59.531 Module: iaa 00:10:59.531 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:59.531 Queue depth: 32 00:10:59.531 Allocate depth: 32 00:10:59.531 # threads/core: 1 00:10:59.531 Run time: 1 seconds 00:10:59.531 Verify: Yes 00:10:59.531 00:10:59.531 Running for 1 seconds... 00:10:59.531 00:10:59.531 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:59.531 ------------------------------------------------------------------------------------ 00:10:59.531 0,0 288976/s 655 MiB/s 0 0 00:10:59.531 ==================================================================================== 00:10:59.531 Total 288976/s 1128 MiB/s 0 0' 00:10:59.531 20:05:56 -- accel/accel.sh@20 -- # IFS=: 00:10:59.531 20:05:56 -- accel/accel.sh@20 -- # read -r var val 00:10:59.531 20:05:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:59.531 20:05:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:59.531 20:05:56 -- accel/accel.sh@12 -- # build_accel_config 00:10:59.531 20:05:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:59.531 20:05:56 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:59.531 20:05:56 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:59.531 20:05:56 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:59.531 20:05:56 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:59.531 20:05:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:59.531 20:05:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:59.531 20:05:56 -- accel/accel.sh@41 -- # local IFS=, 00:10:59.531 20:05:56 -- accel/accel.sh@42 -- # jq -r . 00:10:59.531 [2024-04-25 20:05:56.866056] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:10:59.531 [2024-04-25 20:05:56.866183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393778 ] 00:10:59.531 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.531 [2024-04-25 20:05:56.982567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.531 [2024-04-25 20:05:57.077643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.531 [2024-04-25 20:05:57.082197] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:59.531 [2024-04-25 20:05:57.090161] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:06.117 20:06:03 -- accel/accel.sh@21 -- # val= 00:11:06.117 20:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # IFS=: 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # read -r var val 00:11:06.117 20:06:03 -- accel/accel.sh@21 -- # val= 00:11:06.117 20:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # IFS=: 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # read -r var val 00:11:06.117 20:06:03 -- accel/accel.sh@21 -- # val= 00:11:06.117 20:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # IFS=: 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # read -r var val 00:11:06.117 20:06:03 -- accel/accel.sh@21 -- # val=0x1 00:11:06.117 20:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # IFS=: 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # read -r var val 00:11:06.117 20:06:03 -- accel/accel.sh@21 -- # val= 00:11:06.117 20:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # IFS=: 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # read -r var val 00:11:06.117 20:06:03 -- accel/accel.sh@21 -- # val= 00:11:06.117 20:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # IFS=: 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # read -r var val 00:11:06.117 20:06:03 -- accel/accel.sh@21 -- # val=decompress 00:11:06.117 20:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.117 20:06:03 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # IFS=: 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # read -r var val 00:11:06.117 20:06:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:06.117 20:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # IFS=: 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # read -r var val 00:11:06.117 20:06:03 -- accel/accel.sh@21 -- # val= 00:11:06.117 20:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # IFS=: 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # read -r var val 00:11:06.117 20:06:03 -- accel/accel.sh@21 -- # val=iaa 00:11:06.117 20:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.117 20:06:03 -- accel/accel.sh@23 -- # accel_module=iaa 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # IFS=: 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # read -r var val 00:11:06.117 20:06:03 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:06.117 20:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # IFS=: 00:11:06.117 20:06:03 -- 
accel/accel.sh@20 -- # read -r var val 00:11:06.117 20:06:03 -- accel/accel.sh@21 -- # val=32 00:11:06.117 20:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # IFS=: 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # read -r var val 00:11:06.117 20:06:03 -- accel/accel.sh@21 -- # val=32 00:11:06.117 20:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # IFS=: 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # read -r var val 00:11:06.117 20:06:03 -- accel/accel.sh@21 -- # val=1 00:11:06.117 20:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # IFS=: 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # read -r var val 00:11:06.117 20:06:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:06.117 20:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # IFS=: 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # read -r var val 00:11:06.117 20:06:03 -- accel/accel.sh@21 -- # val=Yes 00:11:06.117 20:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # IFS=: 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # read -r var val 00:11:06.117 20:06:03 -- accel/accel.sh@21 -- # val= 00:11:06.117 20:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # IFS=: 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # read -r var val 00:11:06.117 20:06:03 -- accel/accel.sh@21 -- # val= 00:11:06.117 20:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # IFS=: 00:11:06.117 20:06:03 -- accel/accel.sh@20 -- # read -r var val 00:11:08.663 20:06:06 -- accel/accel.sh@21 -- # val= 00:11:08.663 20:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.663 20:06:06 -- accel/accel.sh@20 -- # IFS=: 00:11:08.663 20:06:06 -- accel/accel.sh@20 -- # read -r var val 00:11:08.663 20:06:06 -- accel/accel.sh@21 -- # val= 00:11:08.663 20:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.663 20:06:06 -- accel/accel.sh@20 -- # IFS=: 00:11:08.663 20:06:06 -- accel/accel.sh@20 -- # read -r var val 00:11:08.663 20:06:06 -- accel/accel.sh@21 -- # val= 00:11:08.663 20:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.663 20:06:06 -- accel/accel.sh@20 -- # IFS=: 00:11:08.663 20:06:06 -- accel/accel.sh@20 -- # read -r var val 00:11:08.663 20:06:06 -- accel/accel.sh@21 -- # val= 00:11:08.663 20:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.663 20:06:06 -- accel/accel.sh@20 -- # IFS=: 00:11:08.663 20:06:06 -- accel/accel.sh@20 -- # read -r var val 00:11:08.663 20:06:06 -- accel/accel.sh@21 -- # val= 00:11:08.663 20:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.663 20:06:06 -- accel/accel.sh@20 -- # IFS=: 00:11:08.663 20:06:06 -- accel/accel.sh@20 -- # read -r var val 00:11:08.663 20:06:06 -- accel/accel.sh@21 -- # val= 00:11:08.663 20:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.663 20:06:06 -- accel/accel.sh@20 -- # IFS=: 00:11:08.663 20:06:06 -- accel/accel.sh@20 -- # read -r var val 00:11:08.663 20:06:06 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:11:08.663 20:06:06 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:08.663 20:06:06 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:11:08.663 00:11:08.663 real 0m19.394s 00:11:08.663 user 0m6.532s 00:11:08.663 sys 0m0.497s 00:11:08.663 20:06:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:08.663 20:06:06 -- common/autotest_common.sh@10 -- # set +x 00:11:08.663 
************************************ 00:11:08.663 END TEST accel_decomp 00:11:08.663 ************************************ 00:11:08.663 20:06:06 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:08.663 20:06:06 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:08.663 20:06:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:08.663 20:06:06 -- common/autotest_common.sh@10 -- # set +x 00:11:08.663 ************************************ 00:11:08.663 START TEST accel_decmop_full 00:11:08.663 ************************************ 00:11:08.663 20:06:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:08.663 20:06:06 -- accel/accel.sh@16 -- # local accel_opc 00:11:08.663 20:06:06 -- accel/accel.sh@17 -- # local accel_module 00:11:08.663 20:06:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:08.663 20:06:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:08.663 20:06:06 -- accel/accel.sh@12 -- # build_accel_config 00:11:08.663 20:06:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:08.663 20:06:06 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:08.663 20:06:06 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:08.663 20:06:06 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:08.663 20:06:06 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:08.663 20:06:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:08.663 20:06:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:08.663 20:06:06 -- accel/accel.sh@41 -- # local IFS=, 00:11:08.663 20:06:06 -- accel/accel.sh@42 -- # jq -r . 00:11:08.663 [2024-04-25 20:06:06.568524] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:08.663 [2024-04-25 20:06:06.568620] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1396131 ] 00:11:08.924 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.924 [2024-04-25 20:06:06.657744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.924 [2024-04-25 20:06:06.752780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.924 [2024-04-25 20:06:06.757343] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:08.924 [2024-04-25 20:06:06.765308] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:18.921 20:06:16 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:18.921 00:11:18.921 SPDK Configuration: 00:11:18.921 Core mask: 0x1 00:11:18.921 00:11:18.921 Accel Perf Configuration: 00:11:18.921 Workload Type: decompress 00:11:18.921 Transfer size: 111250 bytes 00:11:18.921 Vector count 1 00:11:18.921 Module: iaa 00:11:18.921 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:18.921 Queue depth: 32 00:11:18.921 Allocate depth: 32 00:11:18.921 # threads/core: 1 00:11:18.921 Run time: 1 seconds 00:11:18.921 Verify: Yes 00:11:18.921 00:11:18.922 Running for 1 seconds... 00:11:18.922 00:11:18.922 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:18.922 ------------------------------------------------------------------------------------ 00:11:18.922 0,0 109266/s 6160 MiB/s 0 0 00:11:18.922 ==================================================================================== 00:11:18.922 Total 109266/s 11592 MiB/s 0 0' 00:11:18.922 20:06:16 -- accel/accel.sh@20 -- # IFS=: 00:11:18.922 20:06:16 -- accel/accel.sh@20 -- # read -r var val 00:11:18.922 20:06:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:18.922 20:06:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:18.922 20:06:16 -- accel/accel.sh@12 -- # build_accel_config 00:11:18.922 20:06:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:18.922 20:06:16 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:18.922 20:06:16 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:18.922 20:06:16 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:18.922 20:06:16 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:18.922 20:06:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:18.922 20:06:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:18.922 20:06:16 -- accel/accel.sh@41 -- # local IFS=, 00:11:18.922 20:06:16 -- accel/accel.sh@42 -- # jq -r . 00:11:18.922 [2024-04-25 20:06:16.258079] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:11:18.922 [2024-04-25 20:06:16.258210] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398050 ] 00:11:18.922 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.922 [2024-04-25 20:06:16.374834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.922 [2024-04-25 20:06:16.471482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.922 [2024-04-25 20:06:16.476069] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:18.922 [2024-04-25 20:06:16.484038] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:25.511 20:06:22 -- accel/accel.sh@21 -- # val= 00:11:25.511 20:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # IFS=: 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # read -r var val 00:11:25.511 20:06:22 -- accel/accel.sh@21 -- # val= 00:11:25.511 20:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # IFS=: 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # read -r var val 00:11:25.511 20:06:22 -- accel/accel.sh@21 -- # val= 00:11:25.511 20:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # IFS=: 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # read -r var val 00:11:25.511 20:06:22 -- accel/accel.sh@21 -- # val=0x1 00:11:25.511 20:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # IFS=: 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # read -r var val 00:11:25.511 20:06:22 -- accel/accel.sh@21 -- # val= 00:11:25.511 20:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # IFS=: 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # read -r var val 00:11:25.511 20:06:22 -- accel/accel.sh@21 -- # val= 00:11:25.511 20:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # IFS=: 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # read -r var val 00:11:25.511 20:06:22 -- accel/accel.sh@21 -- # val=decompress 00:11:25.511 20:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.511 20:06:22 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # IFS=: 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # read -r var val 00:11:25.511 20:06:22 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:25.511 20:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # IFS=: 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # read -r var val 00:11:25.511 20:06:22 -- accel/accel.sh@21 -- # val= 00:11:25.511 20:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # IFS=: 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # read -r var val 00:11:25.511 20:06:22 -- accel/accel.sh@21 -- # val=iaa 00:11:25.511 20:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.511 20:06:22 -- accel/accel.sh@23 -- # accel_module=iaa 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # IFS=: 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # read -r var val 00:11:25.511 20:06:22 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:25.511 20:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # IFS=: 00:11:25.511 20:06:22 -- 
accel/accel.sh@20 -- # read -r var val 00:11:25.511 20:06:22 -- accel/accel.sh@21 -- # val=32 00:11:25.511 20:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # IFS=: 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # read -r var val 00:11:25.511 20:06:22 -- accel/accel.sh@21 -- # val=32 00:11:25.511 20:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # IFS=: 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # read -r var val 00:11:25.511 20:06:22 -- accel/accel.sh@21 -- # val=1 00:11:25.511 20:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # IFS=: 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # read -r var val 00:11:25.511 20:06:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:25.511 20:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # IFS=: 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # read -r var val 00:11:25.511 20:06:22 -- accel/accel.sh@21 -- # val=Yes 00:11:25.511 20:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # IFS=: 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # read -r var val 00:11:25.511 20:06:22 -- accel/accel.sh@21 -- # val= 00:11:25.511 20:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # IFS=: 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # read -r var val 00:11:25.511 20:06:22 -- accel/accel.sh@21 -- # val= 00:11:25.511 20:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # IFS=: 00:11:25.511 20:06:22 -- accel/accel.sh@20 -- # read -r var val 00:11:28.049 20:06:25 -- accel/accel.sh@21 -- # val= 00:11:28.049 20:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.049 20:06:25 -- accel/accel.sh@20 -- # IFS=: 00:11:28.049 20:06:25 -- accel/accel.sh@20 -- # read -r var val 00:11:28.049 20:06:25 -- accel/accel.sh@21 -- # val= 00:11:28.049 20:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.049 20:06:25 -- accel/accel.sh@20 -- # IFS=: 00:11:28.049 20:06:25 -- accel/accel.sh@20 -- # read -r var val 00:11:28.049 20:06:25 -- accel/accel.sh@21 -- # val= 00:11:28.049 20:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.049 20:06:25 -- accel/accel.sh@20 -- # IFS=: 00:11:28.049 20:06:25 -- accel/accel.sh@20 -- # read -r var val 00:11:28.049 20:06:25 -- accel/accel.sh@21 -- # val= 00:11:28.049 20:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.049 20:06:25 -- accel/accel.sh@20 -- # IFS=: 00:11:28.049 20:06:25 -- accel/accel.sh@20 -- # read -r var val 00:11:28.049 20:06:25 -- accel/accel.sh@21 -- # val= 00:11:28.049 20:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.049 20:06:25 -- accel/accel.sh@20 -- # IFS=: 00:11:28.049 20:06:25 -- accel/accel.sh@20 -- # read -r var val 00:11:28.049 20:06:25 -- accel/accel.sh@21 -- # val= 00:11:28.049 20:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.049 20:06:25 -- accel/accel.sh@20 -- # IFS=: 00:11:28.049 20:06:25 -- accel/accel.sh@20 -- # read -r var val 00:11:28.049 20:06:25 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:11:28.049 20:06:25 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:28.049 20:06:25 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:11:28.049 00:11:28.049 real 0m19.376s 00:11:28.049 user 0m6.530s 00:11:28.049 sys 0m0.479s 00:11:28.049 20:06:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.049 20:06:25 -- common/autotest_common.sh@10 -- # set +x 00:11:28.049 
************************************ 00:11:28.049 END TEST accel_decmop_full 00:11:28.049 ************************************ 00:11:28.049 20:06:25 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:28.049 20:06:25 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:28.049 20:06:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:28.049 20:06:25 -- common/autotest_common.sh@10 -- # set +x 00:11:28.049 ************************************ 00:11:28.049 START TEST accel_decomp_mcore 00:11:28.049 ************************************ 00:11:28.049 20:06:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:28.049 20:06:25 -- accel/accel.sh@16 -- # local accel_opc 00:11:28.049 20:06:25 -- accel/accel.sh@17 -- # local accel_module 00:11:28.049 20:06:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:28.049 20:06:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:28.049 20:06:25 -- accel/accel.sh@12 -- # build_accel_config 00:11:28.049 20:06:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:28.049 20:06:25 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:28.049 20:06:25 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:28.049 20:06:25 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:28.049 20:06:25 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:28.049 20:06:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:28.049 20:06:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:28.049 20:06:25 -- accel/accel.sh@41 -- # local IFS=, 00:11:28.049 20:06:25 -- accel/accel.sh@42 -- # jq -r . 00:11:28.049 [2024-04-25 20:06:25.975276] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:28.050 [2024-04-25 20:06:25.975368] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1400051 ] 00:11:28.310 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.310 [2024-04-25 20:06:26.063915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:28.310 [2024-04-25 20:06:26.162534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.310 [2024-04-25 20:06:26.162614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.310 [2024-04-25 20:06:26.162887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.310 [2024-04-25 20:06:26.162900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.310 [2024-04-25 20:06:26.167499] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:28.310 [2024-04-25 20:06:26.175454] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:38.376 20:06:35 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:38.376 00:11:38.376 SPDK Configuration: 00:11:38.376 Core mask: 0xf 00:11:38.376 00:11:38.376 Accel Perf Configuration: 00:11:38.376 Workload Type: decompress 00:11:38.376 Transfer size: 4096 bytes 00:11:38.376 Vector count 1 00:11:38.376 Module: iaa 00:11:38.376 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:38.376 Queue depth: 32 00:11:38.376 Allocate depth: 32 00:11:38.376 # threads/core: 1 00:11:38.376 Run time: 1 seconds 00:11:38.376 Verify: Yes 00:11:38.376 00:11:38.376 Running for 1 seconds... 00:11:38.376 00:11:38.376 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:38.376 ------------------------------------------------------------------------------------ 00:11:38.376 0,0 110128/s 249 MiB/s 0 0 00:11:38.376 3,0 113632/s 257 MiB/s 0 0 00:11:38.376 2,0 112448/s 255 MiB/s 0 0 00:11:38.376 1,0 112544/s 255 MiB/s 0 0 00:11:38.376 ==================================================================================== 00:11:38.376 Total 448752/s 1752 MiB/s 0 0' 00:11:38.376 20:06:35 -- accel/accel.sh@20 -- # IFS=: 00:11:38.376 20:06:35 -- accel/accel.sh@20 -- # read -r var val 00:11:38.376 20:06:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:38.376 20:06:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:38.376 20:06:35 -- accel/accel.sh@12 -- # build_accel_config 00:11:38.376 20:06:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:38.376 20:06:35 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:38.376 20:06:35 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:38.376 20:06:35 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:38.376 20:06:35 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:38.376 20:06:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:38.376 20:06:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:38.376 20:06:35 -- accel/accel.sh@41 -- # local IFS=, 00:11:38.376 20:06:35 -- accel/accel.sh@42 -- # jq -r . 00:11:38.376 [2024-04-25 20:06:35.653360] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:11:38.376 [2024-04-25 20:06:35.653488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1401917 ] 00:11:38.376 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.376 [2024-04-25 20:06:35.773749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.376 [2024-04-25 20:06:35.877810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.376 [2024-04-25 20:06:35.877850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.376 [2024-04-25 20:06:35.877866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.376 [2024-04-25 20:06:35.877870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.376 [2024-04-25 20:06:35.882531] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:38.376 [2024-04-25 20:06:35.890485] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:44.953 20:06:42 -- accel/accel.sh@21 -- # val= 00:11:44.953 20:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # IFS=: 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # read -r var val 00:11:44.953 20:06:42 -- accel/accel.sh@21 -- # val= 00:11:44.953 20:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # IFS=: 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # read -r var val 00:11:44.953 20:06:42 -- accel/accel.sh@21 -- # val= 00:11:44.953 20:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # IFS=: 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # read -r var val 00:11:44.953 20:06:42 -- accel/accel.sh@21 -- # val=0xf 00:11:44.953 20:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # IFS=: 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # read -r var val 00:11:44.953 20:06:42 -- accel/accel.sh@21 -- # val= 00:11:44.953 20:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # IFS=: 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # read -r var val 00:11:44.953 20:06:42 -- accel/accel.sh@21 -- # val= 00:11:44.953 20:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # IFS=: 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # read -r var val 00:11:44.953 20:06:42 -- accel/accel.sh@21 -- # val=decompress 00:11:44.953 20:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.953 20:06:42 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # IFS=: 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # read -r var val 00:11:44.953 20:06:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:44.953 20:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # IFS=: 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # read -r var val 00:11:44.953 20:06:42 -- accel/accel.sh@21 -- # val= 00:11:44.953 20:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # IFS=: 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # read -r var val 00:11:44.953 20:06:42 -- accel/accel.sh@21 -- # val=iaa 00:11:44.953 20:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.953 20:06:42 -- accel/accel.sh@23 -- # accel_module=iaa 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # IFS=: 
00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # read -r var val 00:11:44.953 20:06:42 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:44.953 20:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # IFS=: 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # read -r var val 00:11:44.953 20:06:42 -- accel/accel.sh@21 -- # val=32 00:11:44.953 20:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # IFS=: 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # read -r var val 00:11:44.953 20:06:42 -- accel/accel.sh@21 -- # val=32 00:11:44.953 20:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # IFS=: 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # read -r var val 00:11:44.953 20:06:42 -- accel/accel.sh@21 -- # val=1 00:11:44.953 20:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # IFS=: 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # read -r var val 00:11:44.953 20:06:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:44.953 20:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # IFS=: 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # read -r var val 00:11:44.953 20:06:42 -- accel/accel.sh@21 -- # val=Yes 00:11:44.953 20:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # IFS=: 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # read -r var val 00:11:44.953 20:06:42 -- accel/accel.sh@21 -- # val= 00:11:44.953 20:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # IFS=: 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # read -r var val 00:11:44.953 20:06:42 -- accel/accel.sh@21 -- # val= 00:11:44.953 20:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # IFS=: 00:11:44.953 20:06:42 -- accel/accel.sh@20 -- # read -r var val 00:11:47.502 20:06:45 -- accel/accel.sh@21 -- # val= 00:11:47.502 20:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.502 20:06:45 -- accel/accel.sh@20 -- # IFS=: 00:11:47.502 20:06:45 -- accel/accel.sh@20 -- # read -r var val 00:11:47.502 20:06:45 -- accel/accel.sh@21 -- # val= 00:11:47.502 20:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.502 20:06:45 -- accel/accel.sh@20 -- # IFS=: 00:11:47.502 20:06:45 -- accel/accel.sh@20 -- # read -r var val 00:11:47.502 20:06:45 -- accel/accel.sh@21 -- # val= 00:11:47.502 20:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.502 20:06:45 -- accel/accel.sh@20 -- # IFS=: 00:11:47.502 20:06:45 -- accel/accel.sh@20 -- # read -r var val 00:11:47.502 20:06:45 -- accel/accel.sh@21 -- # val= 00:11:47.502 20:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.502 20:06:45 -- accel/accel.sh@20 -- # IFS=: 00:11:47.502 20:06:45 -- accel/accel.sh@20 -- # read -r var val 00:11:47.502 20:06:45 -- accel/accel.sh@21 -- # val= 00:11:47.502 20:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.502 20:06:45 -- accel/accel.sh@20 -- # IFS=: 00:11:47.502 20:06:45 -- accel/accel.sh@20 -- # read -r var val 00:11:47.502 20:06:45 -- accel/accel.sh@21 -- # val= 00:11:47.502 20:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.502 20:06:45 -- accel/accel.sh@20 -- # IFS=: 00:11:47.502 20:06:45 -- accel/accel.sh@20 -- # read -r var val 00:11:47.502 20:06:45 -- accel/accel.sh@21 -- # val= 00:11:47.502 20:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.502 
20:06:45 -- accel/accel.sh@20 -- # IFS=: 00:11:47.502 20:06:45 -- accel/accel.sh@20 -- # read -r var val 00:11:47.502 20:06:45 -- accel/accel.sh@21 -- # val= 00:11:47.502 20:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.502 20:06:45 -- accel/accel.sh@20 -- # IFS=: 00:11:47.502 20:06:45 -- accel/accel.sh@20 -- # read -r var val 00:11:47.502 20:06:45 -- accel/accel.sh@21 -- # val= 00:11:47.502 20:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:47.502 20:06:45 -- accel/accel.sh@20 -- # IFS=: 00:11:47.502 20:06:45 -- accel/accel.sh@20 -- # read -r var val 00:11:47.502 20:06:45 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:11:47.502 20:06:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:47.502 20:06:45 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:11:47.503 00:11:47.503 real 0m19.397s 00:11:47.503 user 1m2.138s 00:11:47.503 sys 0m0.501s 00:11:47.503 20:06:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.503 20:06:45 -- common/autotest_common.sh@10 -- # set +x 00:11:47.503 ************************************ 00:11:47.503 END TEST accel_decomp_mcore 00:11:47.503 ************************************ 00:11:47.503 20:06:45 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:47.503 20:06:45 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:47.503 20:06:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:47.503 20:06:45 -- common/autotest_common.sh@10 -- # set +x 00:11:47.503 ************************************ 00:11:47.503 START TEST accel_decomp_full_mcore 00:11:47.503 ************************************ 00:11:47.503 20:06:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:47.503 20:06:45 -- accel/accel.sh@16 -- # local accel_opc 00:11:47.503 20:06:45 -- accel/accel.sh@17 -- # local accel_module 00:11:47.503 20:06:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:47.503 20:06:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:47.503 20:06:45 -- accel/accel.sh@12 -- # build_accel_config 00:11:47.503 20:06:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:47.503 20:06:45 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:47.503 20:06:45 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:47.503 20:06:45 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:47.503 20:06:45 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:47.503 20:06:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:47.503 20:06:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:47.503 20:06:45 -- accel/accel.sh@41 -- # local IFS=, 00:11:47.503 20:06:45 -- accel/accel.sh@42 -- # jq -r . 00:11:47.503 [2024-04-25 20:06:45.417004] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:11:47.503 [2024-04-25 20:06:45.417119] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1403957 ] 00:11:47.763 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.763 [2024-04-25 20:06:45.529335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.763 [2024-04-25 20:06:45.626650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.763 [2024-04-25 20:06:45.626748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.763 [2024-04-25 20:06:45.626846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.763 [2024-04-25 20:06:45.626858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.763 [2024-04-25 20:06:45.631421] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:47.763 [2024-04-25 20:06:45.639387] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:57.746 20:06:55 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:57.746 00:11:57.746 SPDK Configuration: 00:11:57.746 Core mask: 0xf 00:11:57.746 00:11:57.746 Accel Perf Configuration: 00:11:57.746 Workload Type: decompress 00:11:57.746 Transfer size: 111250 bytes 00:11:57.746 Vector count 1 00:11:57.746 Module: iaa 00:11:57.746 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:57.746 Queue depth: 32 00:11:57.746 Allocate depth: 32 00:11:57.746 # threads/core: 1 00:11:57.746 Run time: 1 seconds 00:11:57.746 Verify: Yes 00:11:57.746 00:11:57.746 Running for 1 seconds... 00:11:57.746 00:11:57.746 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:57.746 ------------------------------------------------------------------------------------ 00:11:57.746 0,0 82512/s 4651 MiB/s 0 0 00:11:57.746 3,0 86096/s 4853 MiB/s 0 0 00:11:57.746 2,0 85826/s 4838 MiB/s 0 0 00:11:57.746 1,0 85024/s 4793 MiB/s 0 0 00:11:57.746 ==================================================================================== 00:11:57.746 Total 339458/s 36015 MiB/s 0 0' 00:11:57.746 20:06:55 -- accel/accel.sh@20 -- # IFS=: 00:11:57.746 20:06:55 -- accel/accel.sh@20 -- # read -r var val 00:11:57.746 20:06:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:57.746 20:06:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:57.746 20:06:55 -- accel/accel.sh@12 -- # build_accel_config 00:11:57.746 20:06:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:57.746 20:06:55 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:57.746 20:06:55 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:57.746 20:06:55 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:57.746 20:06:55 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:57.746 20:06:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:57.746 20:06:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:57.746 20:06:55 -- accel/accel.sh@41 -- # local IFS=, 00:11:57.746 20:06:55 -- accel/accel.sh@42 -- # jq -r . 00:11:57.746 [2024-04-25 20:06:55.160951] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:11:57.746 [2024-04-25 20:06:55.161078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405755 ] 00:11:57.746 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.746 [2024-04-25 20:06:55.278230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.746 [2024-04-25 20:06:55.377975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.746 [2024-04-25 20:06:55.378087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.746 [2024-04-25 20:06:55.378184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.746 [2024-04-25 20:06:55.378195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.746 [2024-04-25 20:06:55.382789] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:57.746 [2024-04-25 20:06:55.390754] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:04.326 20:07:01 -- accel/accel.sh@21 -- # val= 00:12:04.326 20:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # IFS=: 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # read -r var val 00:12:04.326 20:07:01 -- accel/accel.sh@21 -- # val= 00:12:04.326 20:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # IFS=: 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # read -r var val 00:12:04.326 20:07:01 -- accel/accel.sh@21 -- # val= 00:12:04.326 20:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # IFS=: 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # read -r var val 00:12:04.326 20:07:01 -- accel/accel.sh@21 -- # val=0xf 00:12:04.326 20:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # IFS=: 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # read -r var val 00:12:04.326 20:07:01 -- accel/accel.sh@21 -- # val= 00:12:04.326 20:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # IFS=: 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # read -r var val 00:12:04.326 20:07:01 -- accel/accel.sh@21 -- # val= 00:12:04.326 20:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # IFS=: 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # read -r var val 00:12:04.326 20:07:01 -- accel/accel.sh@21 -- # val=decompress 00:12:04.326 20:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.326 20:07:01 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # IFS=: 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # read -r var val 00:12:04.326 20:07:01 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:04.326 20:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # IFS=: 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # read -r var val 00:12:04.326 20:07:01 -- accel/accel.sh@21 -- # val= 00:12:04.326 20:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # IFS=: 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # read -r var val 00:12:04.326 20:07:01 -- accel/accel.sh@21 -- # val=iaa 00:12:04.326 20:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.326 20:07:01 -- accel/accel.sh@23 -- # accel_module=iaa 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # IFS=: 
00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # read -r var val 00:12:04.326 20:07:01 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:12:04.326 20:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # IFS=: 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # read -r var val 00:12:04.326 20:07:01 -- accel/accel.sh@21 -- # val=32 00:12:04.326 20:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # IFS=: 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # read -r var val 00:12:04.326 20:07:01 -- accel/accel.sh@21 -- # val=32 00:12:04.326 20:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # IFS=: 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # read -r var val 00:12:04.326 20:07:01 -- accel/accel.sh@21 -- # val=1 00:12:04.326 20:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # IFS=: 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # read -r var val 00:12:04.326 20:07:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:04.326 20:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # IFS=: 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # read -r var val 00:12:04.326 20:07:01 -- accel/accel.sh@21 -- # val=Yes 00:12:04.326 20:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # IFS=: 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # read -r var val 00:12:04.326 20:07:01 -- accel/accel.sh@21 -- # val= 00:12:04.326 20:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # IFS=: 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # read -r var val 00:12:04.326 20:07:01 -- accel/accel.sh@21 -- # val= 00:12:04.326 20:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # IFS=: 00:12:04.326 20:07:01 -- accel/accel.sh@20 -- # read -r var val 00:12:07.626 20:07:04 -- accel/accel.sh@21 -- # val= 00:12:07.626 20:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.626 20:07:04 -- accel/accel.sh@20 -- # IFS=: 00:12:07.626 20:07:04 -- accel/accel.sh@20 -- # read -r var val 00:12:07.626 20:07:04 -- accel/accel.sh@21 -- # val= 00:12:07.626 20:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.626 20:07:04 -- accel/accel.sh@20 -- # IFS=: 00:12:07.626 20:07:04 -- accel/accel.sh@20 -- # read -r var val 00:12:07.626 20:07:04 -- accel/accel.sh@21 -- # val= 00:12:07.626 20:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.626 20:07:04 -- accel/accel.sh@20 -- # IFS=: 00:12:07.626 20:07:04 -- accel/accel.sh@20 -- # read -r var val 00:12:07.626 20:07:04 -- accel/accel.sh@21 -- # val= 00:12:07.626 20:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.626 20:07:04 -- accel/accel.sh@20 -- # IFS=: 00:12:07.626 20:07:04 -- accel/accel.sh@20 -- # read -r var val 00:12:07.626 20:07:04 -- accel/accel.sh@21 -- # val= 00:12:07.626 20:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.626 20:07:04 -- accel/accel.sh@20 -- # IFS=: 00:12:07.626 20:07:04 -- accel/accel.sh@20 -- # read -r var val 00:12:07.626 20:07:04 -- accel/accel.sh@21 -- # val= 00:12:07.626 20:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.626 20:07:04 -- accel/accel.sh@20 -- # IFS=: 00:12:07.626 20:07:04 -- accel/accel.sh@20 -- # read -r var val 00:12:07.626 20:07:04 -- accel/accel.sh@21 -- # val= 00:12:07.626 20:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.626 
20:07:04 -- accel/accel.sh@20 -- # IFS=: 00:12:07.626 20:07:04 -- accel/accel.sh@20 -- # read -r var val 00:12:07.626 20:07:04 -- accel/accel.sh@21 -- # val= 00:12:07.626 20:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.626 20:07:04 -- accel/accel.sh@20 -- # IFS=: 00:12:07.626 20:07:04 -- accel/accel.sh@20 -- # read -r var val 00:12:07.626 20:07:04 -- accel/accel.sh@21 -- # val= 00:12:07.626 20:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.626 20:07:04 -- accel/accel.sh@20 -- # IFS=: 00:12:07.626 20:07:04 -- accel/accel.sh@20 -- # read -r var val 00:12:07.626 20:07:04 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:12:07.626 20:07:04 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:07.626 20:07:04 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:12:07.626 00:12:07.626 real 0m19.490s 00:12:07.626 user 1m2.413s 00:12:07.626 sys 0m0.466s 00:12:07.626 20:07:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.626 20:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:07.626 ************************************ 00:12:07.626 END TEST accel_decomp_full_mcore 00:12:07.626 ************************************ 00:12:07.626 20:07:04 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:07.626 20:07:04 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:07.626 20:07:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:07.626 20:07:04 -- common/autotest_common.sh@10 -- # set +x 00:12:07.626 ************************************ 00:12:07.626 START TEST accel_decomp_mthread 00:12:07.626 ************************************ 00:12:07.626 20:07:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:07.626 20:07:04 -- accel/accel.sh@16 -- # local accel_opc 00:12:07.626 20:07:04 -- accel/accel.sh@17 -- # local accel_module 00:12:07.626 20:07:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:07.626 20:07:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:07.626 20:07:04 -- accel/accel.sh@12 -- # build_accel_config 00:12:07.626 20:07:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:07.626 20:07:04 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:12:07.626 20:07:04 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:12:07.626 20:07:04 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:07.626 20:07:04 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:12:07.626 20:07:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:07.626 20:07:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:07.626 20:07:04 -- accel/accel.sh@41 -- # local IFS=, 00:12:07.626 20:07:04 -- accel/accel.sh@42 -- # jq -r . 00:12:07.626 [2024-04-25 20:07:04.932942] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
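Relative to the multi-core run above, the accel_decomp_mthread case below keeps the same binary and accel config and only changes the worker layout: -m 0xf and -o 0 are dropped (the configuration dump below reports core mask 0x1 and 4096-byte transfers) and -T 2 asks for two worker threads on that core, which is why the stats table that follows has rows for thread 0 and thread 1 of core 0. A minimal sketch, reusing the config file assumed in the earlier example:

/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf \
  -c /tmp/accel.json -t 1 -w decompress \
  -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2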
00:12:07.626 [2024-04-25 20:07:04.933034] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407854 ] 00:12:07.626 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.626 [2024-04-25 20:07:05.020866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.626 [2024-04-25 20:07:05.115589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.626 [2024-04-25 20:07:05.120139] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:07.626 [2024-04-25 20:07:05.128104] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:17.623 20:07:14 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:17.623 00:12:17.623 SPDK Configuration: 00:12:17.623 Core mask: 0x1 00:12:17.623 00:12:17.623 Accel Perf Configuration: 00:12:17.623 Workload Type: decompress 00:12:17.623 Transfer size: 4096 bytes 00:12:17.623 Vector count 1 00:12:17.623 Module: iaa 00:12:17.623 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:12:17.623 Queue depth: 32 00:12:17.623 Allocate depth: 32 00:12:17.623 # threads/core: 2 00:12:17.623 Run time: 1 seconds 00:12:17.623 Verify: Yes 00:12:17.623 00:12:17.623 Running for 1 seconds... 00:12:17.623 00:12:17.623 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:17.623 ------------------------------------------------------------------------------------ 00:12:17.623 0,1 147376/s 334 MiB/s 0 0 00:12:17.623 0,0 145984/s 331 MiB/s 0 0 00:12:17.623 ==================================================================================== 00:12:17.623 Total 293360/s 1145 MiB/s 0 0' 00:12:17.623 20:07:14 -- accel/accel.sh@20 -- # IFS=: 00:12:17.623 20:07:14 -- accel/accel.sh@20 -- # read -r var val 00:12:17.623 20:07:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:17.623 20:07:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:17.623 20:07:14 -- accel/accel.sh@12 -- # build_accel_config 00:12:17.623 20:07:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:17.623 20:07:14 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:12:17.623 20:07:14 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:12:17.623 20:07:14 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:17.623 20:07:14 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:12:17.623 20:07:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:17.623 20:07:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:17.623 20:07:14 -- accel/accel.sh@41 -- # local IFS=, 00:12:17.623 20:07:14 -- accel/accel.sh@42 -- # jq -r . 00:12:17.623 [2024-04-25 20:07:14.580850] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
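In the table above, the Total transfer count is the two per-thread rows summed, and the Total bandwidth figure is consistent with that count times the 4096-byte transfer size:

echo $(( (147376 + 145984) * 4096 / 1024 / 1024 ))   # prints 1145 (MiB/s), matching the Total row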
00:12:17.623 [2024-04-25 20:07:14.580971] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409651 ] 00:12:17.623 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.623 [2024-04-25 20:07:14.691660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.623 [2024-04-25 20:07:14.786807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.623 [2024-04-25 20:07:14.791356] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:17.623 [2024-04-25 20:07:14.799320] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:24.294 20:07:21 -- accel/accel.sh@21 -- # val= 00:12:24.294 20:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # IFS=: 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # read -r var val 00:12:24.294 20:07:21 -- accel/accel.sh@21 -- # val= 00:12:24.294 20:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # IFS=: 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # read -r var val 00:12:24.294 20:07:21 -- accel/accel.sh@21 -- # val= 00:12:24.294 20:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # IFS=: 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # read -r var val 00:12:24.294 20:07:21 -- accel/accel.sh@21 -- # val=0x1 00:12:24.294 20:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # IFS=: 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # read -r var val 00:12:24.294 20:07:21 -- accel/accel.sh@21 -- # val= 00:12:24.294 20:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # IFS=: 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # read -r var val 00:12:24.294 20:07:21 -- accel/accel.sh@21 -- # val= 00:12:24.294 20:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # IFS=: 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # read -r var val 00:12:24.294 20:07:21 -- accel/accel.sh@21 -- # val=decompress 00:12:24.294 20:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.294 20:07:21 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # IFS=: 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # read -r var val 00:12:24.294 20:07:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:24.294 20:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # IFS=: 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # read -r var val 00:12:24.294 20:07:21 -- accel/accel.sh@21 -- # val= 00:12:24.294 20:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # IFS=: 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # read -r var val 00:12:24.294 20:07:21 -- accel/accel.sh@21 -- # val=iaa 00:12:24.294 20:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.294 20:07:21 -- accel/accel.sh@23 -- # accel_module=iaa 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # IFS=: 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # read -r var val 00:12:24.294 20:07:21 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:12:24.294 20:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # IFS=: 00:12:24.294 20:07:21 -- 
accel/accel.sh@20 -- # read -r var val 00:12:24.294 20:07:21 -- accel/accel.sh@21 -- # val=32 00:12:24.294 20:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # IFS=: 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # read -r var val 00:12:24.294 20:07:21 -- accel/accel.sh@21 -- # val=32 00:12:24.294 20:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # IFS=: 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # read -r var val 00:12:24.294 20:07:21 -- accel/accel.sh@21 -- # val=2 00:12:24.294 20:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # IFS=: 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # read -r var val 00:12:24.294 20:07:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:24.294 20:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # IFS=: 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # read -r var val 00:12:24.294 20:07:21 -- accel/accel.sh@21 -- # val=Yes 00:12:24.294 20:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # IFS=: 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # read -r var val 00:12:24.294 20:07:21 -- accel/accel.sh@21 -- # val= 00:12:24.294 20:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # IFS=: 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # read -r var val 00:12:24.294 20:07:21 -- accel/accel.sh@21 -- # val= 00:12:24.294 20:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # IFS=: 00:12:24.294 20:07:21 -- accel/accel.sh@20 -- # read -r var val 00:12:26.833 20:07:24 -- accel/accel.sh@21 -- # val= 00:12:26.833 20:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.833 20:07:24 -- accel/accel.sh@20 -- # IFS=: 00:12:26.833 20:07:24 -- accel/accel.sh@20 -- # read -r var val 00:12:26.833 20:07:24 -- accel/accel.sh@21 -- # val= 00:12:26.833 20:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.833 20:07:24 -- accel/accel.sh@20 -- # IFS=: 00:12:26.833 20:07:24 -- accel/accel.sh@20 -- # read -r var val 00:12:26.833 20:07:24 -- accel/accel.sh@21 -- # val= 00:12:26.833 20:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.833 20:07:24 -- accel/accel.sh@20 -- # IFS=: 00:12:26.833 20:07:24 -- accel/accel.sh@20 -- # read -r var val 00:12:26.833 20:07:24 -- accel/accel.sh@21 -- # val= 00:12:26.833 20:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.833 20:07:24 -- accel/accel.sh@20 -- # IFS=: 00:12:26.833 20:07:24 -- accel/accel.sh@20 -- # read -r var val 00:12:26.833 20:07:24 -- accel/accel.sh@21 -- # val= 00:12:26.833 20:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.833 20:07:24 -- accel/accel.sh@20 -- # IFS=: 00:12:26.833 20:07:24 -- accel/accel.sh@20 -- # read -r var val 00:12:26.833 20:07:24 -- accel/accel.sh@21 -- # val= 00:12:26.833 20:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.833 20:07:24 -- accel/accel.sh@20 -- # IFS=: 00:12:26.833 20:07:24 -- accel/accel.sh@20 -- # read -r var val 00:12:26.833 20:07:24 -- accel/accel.sh@21 -- # val= 00:12:26.833 20:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.833 20:07:24 -- accel/accel.sh@20 -- # IFS=: 00:12:26.833 20:07:24 -- accel/accel.sh@20 -- # read -r var val 00:12:26.833 20:07:24 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:12:26.833 20:07:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:26.833 20:07:24 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:12:26.833 
00:12:26.833 real 0m19.343s 00:12:26.833 user 0m6.516s 00:12:26.833 sys 0m0.467s 00:12:26.833 20:07:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:26.833 20:07:24 -- common/autotest_common.sh@10 -- # set +x 00:12:26.833 ************************************ 00:12:26.833 END TEST accel_decomp_mthread 00:12:26.833 ************************************ 00:12:26.833 20:07:24 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:26.833 20:07:24 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:26.833 20:07:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:26.833 20:07:24 -- common/autotest_common.sh@10 -- # set +x 00:12:26.833 ************************************ 00:12:26.833 START TEST accel_deomp_full_mthread 00:12:26.833 ************************************ 00:12:26.833 20:07:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:26.833 20:07:24 -- accel/accel.sh@16 -- # local accel_opc 00:12:26.833 20:07:24 -- accel/accel.sh@17 -- # local accel_module 00:12:26.833 20:07:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:26.833 20:07:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:26.833 20:07:24 -- accel/accel.sh@12 -- # build_accel_config 00:12:26.833 20:07:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:26.833 20:07:24 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:12:26.833 20:07:24 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:12:26.834 20:07:24 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:26.834 20:07:24 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:12:26.834 20:07:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:26.834 20:07:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:26.834 20:07:24 -- accel/accel.sh@41 -- # local IFS=, 00:12:26.834 20:07:24 -- accel/accel.sh@42 -- # jq -r . 00:12:26.834 [2024-04-25 20:07:24.302060] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:26.834 [2024-04-25 20:07:24.302153] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1411566 ] 00:12:26.834 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.834 [2024-04-25 20:07:24.390845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.834 [2024-04-25 20:07:24.487982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.834 [2024-04-25 20:07:24.492616] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:26.834 [2024-04-25 20:07:24.500580] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:36.828 20:07:33 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:12:36.828 00:12:36.828 SPDK Configuration: 00:12:36.828 Core mask: 0x1 00:12:36.828 00:12:36.828 Accel Perf Configuration: 00:12:36.828 Workload Type: decompress 00:12:36.828 Transfer size: 111250 bytes 00:12:36.828 Vector count 1 00:12:36.828 Module: iaa 00:12:36.828 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:12:36.828 Queue depth: 32 00:12:36.828 Allocate depth: 32 00:12:36.828 # threads/core: 2 00:12:36.828 Run time: 1 seconds 00:12:36.828 Verify: Yes 00:12:36.828 00:12:36.828 Running for 1 seconds... 00:12:36.828 00:12:36.828 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:36.828 ------------------------------------------------------------------------------------ 00:12:36.828 0,1 60816/s 3428 MiB/s 0 0 00:12:36.828 0,0 60336/s 3401 MiB/s 0 0 00:12:36.828 ==================================================================================== 00:12:36.828 Total 121152/s 12853 MiB/s 0 0' 00:12:36.828 20:07:33 -- accel/accel.sh@20 -- # IFS=: 00:12:36.828 20:07:33 -- accel/accel.sh@20 -- # read -r var val 00:12:36.828 20:07:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:36.828 20:07:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:36.828 20:07:33 -- accel/accel.sh@12 -- # build_accel_config 00:12:36.828 20:07:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:36.828 20:07:33 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:12:36.828 20:07:33 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:12:36.828 20:07:33 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:36.828 20:07:33 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:12:36.828 20:07:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:36.828 20:07:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:36.828 20:07:33 -- accel/accel.sh@41 -- # local IFS=, 00:12:36.828 20:07:33 -- accel/accel.sh@42 -- # jq -r . 00:12:36.828 [2024-04-25 20:07:33.986546] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:36.828 [2024-04-25 20:07:33.986663] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1413550 ] 00:12:36.828 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.828 [2024-04-25 20:07:34.097266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.828 [2024-04-25 20:07:34.191598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.828 [2024-04-25 20:07:34.196102] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:36.828 [2024-04-25 20:07:34.204069] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:43.406 20:07:40 -- accel/accel.sh@21 -- # val= 00:12:43.406 20:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # IFS=: 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # read -r var val 00:12:43.406 20:07:40 -- accel/accel.sh@21 -- # val= 00:12:43.406 20:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # IFS=: 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # read -r var val 00:12:43.406 20:07:40 -- accel/accel.sh@21 -- # val= 00:12:43.406 20:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # IFS=: 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # read -r var val 00:12:43.406 20:07:40 -- accel/accel.sh@21 -- # val=0x1 00:12:43.406 20:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # IFS=: 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # read -r var val 00:12:43.406 20:07:40 -- accel/accel.sh@21 -- # val= 00:12:43.406 20:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # IFS=: 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # read -r var val 00:12:43.406 20:07:40 -- accel/accel.sh@21 -- # val= 00:12:43.406 20:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # IFS=: 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # read -r var val 00:12:43.406 20:07:40 -- accel/accel.sh@21 -- # val=decompress 00:12:43.406 20:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.406 20:07:40 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # IFS=: 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # read -r var val 00:12:43.406 20:07:40 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:43.406 20:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # IFS=: 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # read -r var val 00:12:43.406 20:07:40 -- accel/accel.sh@21 -- # val= 00:12:43.406 20:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # IFS=: 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # read -r var val 00:12:43.406 20:07:40 -- accel/accel.sh@21 -- # val=iaa 00:12:43.406 20:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.406 20:07:40 -- accel/accel.sh@23 -- # accel_module=iaa 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # IFS=: 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # read -r var val 00:12:43.406 20:07:40 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:12:43.406 20:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # IFS=: 00:12:43.406 20:07:40 -- 
accel/accel.sh@20 -- # read -r var val 00:12:43.406 20:07:40 -- accel/accel.sh@21 -- # val=32 00:12:43.406 20:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # IFS=: 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # read -r var val 00:12:43.406 20:07:40 -- accel/accel.sh@21 -- # val=32 00:12:43.406 20:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # IFS=: 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # read -r var val 00:12:43.406 20:07:40 -- accel/accel.sh@21 -- # val=2 00:12:43.406 20:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # IFS=: 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # read -r var val 00:12:43.406 20:07:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:43.406 20:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # IFS=: 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # read -r var val 00:12:43.406 20:07:40 -- accel/accel.sh@21 -- # val=Yes 00:12:43.406 20:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # IFS=: 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # read -r var val 00:12:43.406 20:07:40 -- accel/accel.sh@21 -- # val= 00:12:43.406 20:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # IFS=: 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # read -r var val 00:12:43.406 20:07:40 -- accel/accel.sh@21 -- # val= 00:12:43.406 20:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # IFS=: 00:12:43.406 20:07:40 -- accel/accel.sh@20 -- # read -r var val 00:12:45.946 20:07:43 -- accel/accel.sh@21 -- # val= 00:12:45.946 20:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:45.946 20:07:43 -- accel/accel.sh@20 -- # IFS=: 00:12:45.946 20:07:43 -- accel/accel.sh@20 -- # read -r var val 00:12:45.946 20:07:43 -- accel/accel.sh@21 -- # val= 00:12:45.946 20:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:45.946 20:07:43 -- accel/accel.sh@20 -- # IFS=: 00:12:45.946 20:07:43 -- accel/accel.sh@20 -- # read -r var val 00:12:45.946 20:07:43 -- accel/accel.sh@21 -- # val= 00:12:45.946 20:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:45.946 20:07:43 -- accel/accel.sh@20 -- # IFS=: 00:12:45.946 20:07:43 -- accel/accel.sh@20 -- # read -r var val 00:12:45.946 20:07:43 -- accel/accel.sh@21 -- # val= 00:12:45.946 20:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:45.946 20:07:43 -- accel/accel.sh@20 -- # IFS=: 00:12:45.946 20:07:43 -- accel/accel.sh@20 -- # read -r var val 00:12:45.946 20:07:43 -- accel/accel.sh@21 -- # val= 00:12:45.946 20:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:45.946 20:07:43 -- accel/accel.sh@20 -- # IFS=: 00:12:45.946 20:07:43 -- accel/accel.sh@20 -- # read -r var val 00:12:45.946 20:07:43 -- accel/accel.sh@21 -- # val= 00:12:45.946 20:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:45.946 20:07:43 -- accel/accel.sh@20 -- # IFS=: 00:12:45.946 20:07:43 -- accel/accel.sh@20 -- # read -r var val 00:12:45.946 20:07:43 -- accel/accel.sh@21 -- # val= 00:12:45.946 20:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:45.946 20:07:43 -- accel/accel.sh@20 -- # IFS=: 00:12:45.946 20:07:43 -- accel/accel.sh@20 -- # read -r var val 00:12:45.946 20:07:43 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:12:45.946 20:07:43 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:45.946 20:07:43 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:12:45.946 
00:12:45.946 real 0m19.369s 00:12:45.946 user 0m6.570s 00:12:45.946 sys 0m0.430s 00:12:45.946 20:07:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.946 20:07:43 -- common/autotest_common.sh@10 -- # set +x 00:12:45.946 ************************************ 00:12:45.946 END TEST accel_deomp_full_mthread 00:12:45.946 ************************************ 00:12:45.946 20:07:43 -- accel/accel.sh@116 -- # [[ n == y ]] 00:12:45.946 20:07:43 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:45.946 20:07:43 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:45.946 20:07:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:45.946 20:07:43 -- common/autotest_common.sh@10 -- # set +x 00:12:45.946 20:07:43 -- accel/accel.sh@129 -- # build_accel_config 00:12:45.946 20:07:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:45.946 20:07:43 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:12:45.946 20:07:43 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:12:45.946 20:07:43 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:45.946 20:07:43 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:12:45.946 20:07:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:45.946 20:07:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:45.946 20:07:43 -- accel/accel.sh@41 -- # local IFS=, 00:12:45.946 20:07:43 -- accel/accel.sh@42 -- # jq -r . 00:12:45.946 ************************************ 00:12:45.946 START TEST accel_dif_functional_tests 00:12:45.946 ************************************ 00:12:45.946 20:07:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:45.946 [2024-04-25 20:07:43.719093] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
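The accel_dif_functional_tests suite below drives a separate dif test binary with the same DSA/IAA accel config, again passed over a file descriptor. The idxd/accel_dsa *ERROR* dumps that follow are the expected output of the negative-path cases: the "verify: DIF not generated" checks deliberately present mismatched Guard/App/Ref tags, and each case still ends in "passed". A minimal sketch of the invocation, reusing the config file assumed in the earlier example:

/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /tmp/accel.json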
00:12:45.946 [2024-04-25 20:07:43.719173] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1415430 ] 00:12:45.946 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.946 [2024-04-25 20:07:43.803859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:46.206 [2024-04-25 20:07:43.900294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.206 [2024-04-25 20:07:43.900304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.206 [2024-04-25 20:07:43.900304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.206 [2024-04-25 20:07:43.904913] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:46.206 [2024-04-25 20:07:43.912888] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:54.339 00:12:54.339 00:12:54.339 CUnit - A unit testing framework for C - Version 2.1-3 00:12:54.339 http://cunit.sourceforge.net/ 00:12:54.339 00:12:54.339 00:12:54.339 Suite: accel_dif 00:12:54.339 Test: verify: DIF generated, GUARD check ...passed 00:12:54.339 Test: verify: DIF generated, APPTAG check ...passed 00:12:54.339 Test: verify: DIF generated, REFTAG check ...passed 00:12:54.339 Test: verify: DIF not generated, GUARD check ...[2024-04-25 20:07:50.834055] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:12:54.339 [2024-04-25 20:07:50.834106] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-25 20:07:50.834118] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834127] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834133] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834141] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834148] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:54.339 [2024-04-25 20:07:50.834156] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:54.339 [2024-04-25 20:07:50.834163] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:54.339 [2024-04-25 20:07:50.834186] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:54.339 [2024-04-25 20:07:50.834195] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=4, offset=0 00:12:54.339 [2024-04-25 20:07:50.834221] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:54.339 passed 00:12:54.339 Test: verify: DIF not generated, APPTAG check ...[2024-04-25 20:07:50.834281] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:12:54.339 [2024-04-25 20:07:50.834291] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-25 20:07:50.834301] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834308] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834316] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834323] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834331] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:54.339 [2024-04-25 20:07:50.834337] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:54.339 [2024-04-25 20:07:50.834344] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:54.339 [2024-04-25 20:07:50.834352] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:54.339 [2024-04-25 20:07:50.834361] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:12:54.339 [2024-04-25 20:07:50.834378] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:54.339 passed 00:12:54.339 Test: verify: DIF not generated, REFTAG check ...[2024-04-25 20:07:50.834419] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:12:54.339 [2024-04-25 20:07:50.834431] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-25 20:07:50.834437] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834445] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834451] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834459] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834466] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:54.339 [2024-04-25 20:07:50.834478] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:54.339 [2024-04-25 20:07:50.834486] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:54.339 [2024-04-25 20:07:50.834511] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:54.339 [2024-04-25 20:07:50.834518] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=1, offset=0 00:12:54.339 [2024-04-25 20:07:50.834542] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:54.339 passed 00:12:54.339 Test: verify: APPTAG correct, APPTAG check ...passed 00:12:54.339 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-25 20:07:50.834618] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:12:54.339 [2024-04-25 20:07:50.834627] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-25 20:07:50.834635] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834641] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834649] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834655] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834663] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:54.339 [2024-04-25 20:07:50.834669] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:54.339 [2024-04-25 20:07:50.834678] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:54.339 [2024-04-25 20:07:50.834686] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:12:54.339 [2024-04-25 20:07:50.834695] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:12:54.339 passed 00:12:54.339 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:12:54.339 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:12:54.339 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:12:54.339 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-25 20:07:50.834870] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:12:54.339 [2024-04-25 20:07:50.834882] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-25 20:07:50.834888] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834896] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834902] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834910] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834921] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:54.339 [2024-04-25 20:07:50.834929] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:54.339 [2024-04-25 20:07:50.834934] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:54.339 [2024-04-25 20:07:50.834942] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:12:54.339 [2024-04-25 20:07:50.834949] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-25 20:07:50.834957] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834963] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834971] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834977] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.834984] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:54.339 [2024-04-25 20:07:50.834990] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:54.339 [2024-04-25 20:07:50.834999] idxd_user.c: 
436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:54.339 [2024-04-25 20:07:50.835007] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:12:54.339 [2024-04-25 20:07:50.835016] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=1, offset=0 00:12:54.339 [2024-04-25 20:07:50.835025] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x5 00:12:54.339 passed[2024-04-25 20:07:50.835034] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw: 00:12:54.339 Test: generate copy: DIF generated, GUARD check ...[2024-04-25 20:07:50.835041] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.835049] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.835056] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.835065] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:54.339 [2024-04-25 20:07:50.835074] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:54.339 [2024-04-25 20:07:50.835081] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:54.339 [2024-04-25 20:07:50.835087] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:54.339 passed 00:12:54.339 Test: generate copy: DIF generated, APTTAG check ...passed 00:12:54.339 Test: generate copy: DIF generated, REFTAG check ...passed 00:12:54.340 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-04-25 20:07:50.835238] idxd.c:1571:idxd_validate_dif_insert_params: *ERROR*: Guard check flag must be set. 00:12:54.340 passed 00:12:54.340 Test: generate copy: DIF generated, no APPTAG check flag set ...[2024-04-25 20:07:50.835276] idxd.c:1576:idxd_validate_dif_insert_params: *ERROR*: Application Tag check flag must be set. 00:12:54.340 passed 00:12:54.340 Test: generate copy: DIF generated, no REFTAG check flag set ...[2024-04-25 20:07:50.835312] idxd.c:1581:idxd_validate_dif_insert_params: *ERROR*: Reference Tag check flag must be set. 00:12:54.340 passed 00:12:54.340 Test: generate copy: iovecs-len validate ...[2024-04-25 20:07:50.835350] idxd.c:1608:idxd_validate_dif_insert_iovecs: *ERROR*: Invalid length of data in src (4096) and dst (4176) in iovecs[0]. 
00:12:54.340 passed 00:12:54.340 Test: generate copy: buffer alignment validate ...passed 00:12:54.340 00:12:54.340 Run Summary: Type Total Ran Passed Failed Inactive 00:12:54.340 suites 1 1 n/a 0 0 00:12:54.340 tests 20 20 20 0 0 00:12:54.340 asserts 204 204 204 0 n/a 00:12:54.340 00:12:54.340 Elapsed time = 0.003 seconds 00:12:55.279 00:12:55.279 real 0m9.517s 00:12:55.279 user 0m20.110s 00:12:55.279 sys 0m0.249s 00:12:55.279 20:07:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.279 20:07:53 -- common/autotest_common.sh@10 -- # set +x 00:12:55.279 ************************************ 00:12:55.279 END TEST accel_dif_functional_tests 00:12:55.279 ************************************ 00:12:55.539 00:12:55.539 real 7m6.306s 00:12:55.539 user 4m32.500s 00:12:55.539 sys 0m11.735s 00:12:55.539 20:07:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.539 20:07:53 -- common/autotest_common.sh@10 -- # set +x 00:12:55.539 ************************************ 00:12:55.539 END TEST accel 00:12:55.539 ************************************ 00:12:55.539 20:07:53 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh 00:12:55.539 20:07:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:55.539 20:07:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:55.539 20:07:53 -- common/autotest_common.sh@10 -- # set +x 00:12:55.539 ************************************ 00:12:55.539 START TEST accel_rpc 00:12:55.539 ************************************ 00:12:55.539 20:07:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh 00:12:55.539 * Looking for test storage... 00:12:55.539 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel 00:12:55.539 20:07:53 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:55.539 20:07:53 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1417495 00:12:55.539 20:07:53 -- accel/accel_rpc.sh@15 -- # waitforlisten 1417495 00:12:55.539 20:07:53 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:12:55.539 20:07:53 -- common/autotest_common.sh@819 -- # '[' -z 1417495 ']' 00:12:55.539 20:07:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.539 20:07:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:55.539 20:07:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.539 20:07:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:55.539 20:07:53 -- common/autotest_common.sh@10 -- # set +x 00:12:55.539 [2024-04-25 20:07:53.408504] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
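The accel_rpc run that follows exercises the accel module RPCs against a bare spdk_tgt started with --wait-for-rpc. Condensed into the underlying commands (rpc_cmd in the trace wraps scripts/rpc.py; the error-path variants are exercised in the test bodies below):

/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc &
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py dsa_scan_accel_module
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py iaa_scan_accel_module
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_start_init
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py accel_get_opc_assignments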
00:12:55.539 [2024-04-25 20:07:53.408629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1417495 ] 00:12:55.800 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.800 [2024-04-25 20:07:53.535693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.800 [2024-04-25 20:07:53.631906] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:55.800 [2024-04-25 20:07:53.632127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.371 20:07:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:56.371 20:07:54 -- common/autotest_common.sh@852 -- # return 0 00:12:56.371 20:07:54 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:12:56.371 20:07:54 -- accel/accel_rpc.sh@45 -- # [[ 1 -gt 0 ]] 00:12:56.371 20:07:54 -- accel/accel_rpc.sh@46 -- # run_test accel_scan_dsa_modules accel_scan_dsa_modules_test_suite 00:12:56.371 20:07:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:56.371 20:07:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:56.371 20:07:54 -- common/autotest_common.sh@10 -- # set +x 00:12:56.371 ************************************ 00:12:56.371 START TEST accel_scan_dsa_modules 00:12:56.371 ************************************ 00:12:56.371 20:07:54 -- common/autotest_common.sh@1104 -- # accel_scan_dsa_modules_test_suite 00:12:56.371 20:07:54 -- accel/accel_rpc.sh@21 -- # rpc_cmd dsa_scan_accel_module 00:12:56.371 20:07:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.371 20:07:54 -- common/autotest_common.sh@10 -- # set +x 00:12:56.371 [2024-04-25 20:07:54.188647] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:56.371 20:07:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.371 20:07:54 -- accel/accel_rpc.sh@22 -- # NOT rpc_cmd dsa_scan_accel_module 00:12:56.371 20:07:54 -- common/autotest_common.sh@640 -- # local es=0 00:12:56.371 20:07:54 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd dsa_scan_accel_module 00:12:56.371 20:07:54 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:12:56.371 20:07:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:56.371 20:07:54 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:12:56.371 20:07:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:56.371 20:07:54 -- common/autotest_common.sh@643 -- # rpc_cmd dsa_scan_accel_module 00:12:56.371 20:07:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.371 20:07:54 -- common/autotest_common.sh@10 -- # set +x 00:12:56.371 request: 00:12:56.371 { 00:12:56.371 "method": "dsa_scan_accel_module", 00:12:56.371 "req_id": 1 00:12:56.371 } 00:12:56.371 Got JSON-RPC error response 00:12:56.371 response: 00:12:56.371 { 00:12:56.371 "code": -114, 00:12:56.371 "message": "Operation already in progress" 00:12:56.371 } 00:12:56.371 20:07:54 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:56.371 20:07:54 -- common/autotest_common.sh@643 -- # es=1 00:12:56.371 20:07:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:56.371 20:07:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:56.371 20:07:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:56.371 00:12:56.371 real 0m0.021s 00:12:56.371 user 0m0.006s 00:12:56.371 sys 0m0.001s 00:12:56.371 
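The request/response pair printed above is plain JSON-RPC over the target's Unix socket (/var/tmp/spdk.sock in this run); rpc.py is only a thin client. A hedged sketch of issuing the same duplicate scan by hand, assuming a netcat build with Unix-socket support (-U) and the standard JSON-RPC 2.0 envelope:

echo '{"jsonrpc": "2.0", "id": 1, "method": "dsa_scan_accel_module"}' | nc -U /var/tmp/spdk.sock
# a second scan of an already-enabled module returns code -114, "Operation already in progress"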
20:07:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.371 20:07:54 -- common/autotest_common.sh@10 -- # set +x 00:12:56.371 ************************************ 00:12:56.371 END TEST accel_scan_dsa_modules 00:12:56.371 ************************************ 00:12:56.371 20:07:54 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:12:56.371 20:07:54 -- accel/accel_rpc.sh@49 -- # [[ 1 -gt 0 ]] 00:12:56.371 20:07:54 -- accel/accel_rpc.sh@50 -- # run_test accel_scan_iaa_modules accel_scan_iaa_modules_test_suite 00:12:56.371 20:07:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:56.371 20:07:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:56.371 20:07:54 -- common/autotest_common.sh@10 -- # set +x 00:12:56.371 ************************************ 00:12:56.371 START TEST accel_scan_iaa_modules 00:12:56.371 ************************************ 00:12:56.371 20:07:54 -- common/autotest_common.sh@1104 -- # accel_scan_iaa_modules_test_suite 00:12:56.371 20:07:54 -- accel/accel_rpc.sh@29 -- # rpc_cmd iaa_scan_accel_module 00:12:56.371 20:07:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.371 20:07:54 -- common/autotest_common.sh@10 -- # set +x 00:12:56.371 [2024-04-25 20:07:54.244649] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:56.371 20:07:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.371 20:07:54 -- accel/accel_rpc.sh@30 -- # NOT rpc_cmd iaa_scan_accel_module 00:12:56.371 20:07:54 -- common/autotest_common.sh@640 -- # local es=0 00:12:56.371 20:07:54 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd iaa_scan_accel_module 00:12:56.371 20:07:54 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:12:56.371 20:07:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:56.371 20:07:54 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:12:56.371 20:07:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:56.371 20:07:54 -- common/autotest_common.sh@643 -- # rpc_cmd iaa_scan_accel_module 00:12:56.371 20:07:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.372 20:07:54 -- common/autotest_common.sh@10 -- # set +x 00:12:56.372 request: 00:12:56.372 { 00:12:56.372 "method": "iaa_scan_accel_module", 00:12:56.372 "req_id": 1 00:12:56.372 } 00:12:56.372 Got JSON-RPC error response 00:12:56.372 response: 00:12:56.372 { 00:12:56.372 "code": -114, 00:12:56.372 "message": "Operation already in progress" 00:12:56.372 } 00:12:56.372 20:07:54 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:56.372 20:07:54 -- common/autotest_common.sh@643 -- # es=1 00:12:56.372 20:07:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:56.372 20:07:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:56.372 20:07:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:56.372 00:12:56.372 real 0m0.020s 00:12:56.372 user 0m0.002s 00:12:56.372 sys 0m0.003s 00:12:56.372 20:07:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.372 20:07:54 -- common/autotest_common.sh@10 -- # set +x 00:12:56.372 ************************************ 00:12:56.372 END TEST accel_scan_iaa_modules 00:12:56.372 ************************************ 00:12:56.372 20:07:54 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:12:56.372 20:07:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:56.372 20:07:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:56.372 20:07:54 
-- common/autotest_common.sh@10 -- # set +x 00:12:56.372 ************************************ 00:12:56.372 START TEST accel_assign_opcode 00:12:56.372 ************************************ 00:12:56.372 20:07:54 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:12:56.372 20:07:54 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:12:56.372 20:07:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.372 20:07:54 -- common/autotest_common.sh@10 -- # set +x 00:12:56.630 [2024-04-25 20:07:54.304689] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:12:56.630 20:07:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.630 20:07:54 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:12:56.630 20:07:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.630 20:07:54 -- common/autotest_common.sh@10 -- # set +x 00:12:56.630 [2024-04-25 20:07:54.312662] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:12:56.630 20:07:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.630 20:07:54 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:12:56.630 20:07:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.630 20:07:54 -- common/autotest_common.sh@10 -- # set +x 00:13:04.762 20:08:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.762 20:08:01 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:04.762 20:08:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.762 20:08:01 -- common/autotest_common.sh@10 -- # set +x 00:13:04.762 20:08:01 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:04.762 20:08:01 -- accel/accel_rpc.sh@42 -- # grep software 00:13:04.762 20:08:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.762 software 00:13:04.762 00:13:04.762 real 0m7.179s 00:13:04.762 user 0m0.032s 00:13:04.762 sys 0m0.011s 00:13:04.762 20:08:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:04.762 20:08:01 -- common/autotest_common.sh@10 -- # set +x 00:13:04.762 ************************************ 00:13:04.762 END TEST accel_assign_opcode 00:13:04.762 ************************************ 00:13:04.762 20:08:01 -- accel/accel_rpc.sh@55 -- # killprocess 1417495 00:13:04.762 20:08:01 -- common/autotest_common.sh@926 -- # '[' -z 1417495 ']' 00:13:04.762 20:08:01 -- common/autotest_common.sh@930 -- # kill -0 1417495 00:13:04.762 20:08:01 -- common/autotest_common.sh@931 -- # uname 00:13:04.762 20:08:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:04.762 20:08:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1417495 00:13:04.762 20:08:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:04.762 20:08:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:04.762 20:08:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1417495' 00:13:04.762 killing process with pid 1417495 00:13:04.762 20:08:01 -- common/autotest_common.sh@945 -- # kill 1417495 00:13:04.762 20:08:01 -- common/autotest_common.sh@950 -- # wait 1417495 00:13:06.752 00:13:06.752 real 0m11.107s 00:13:06.752 user 0m4.139s 00:13:06.752 sys 0m0.683s 00:13:06.752 20:08:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.752 20:08:04 -- common/autotest_common.sh@10 -- # set +x 00:13:06.752 ************************************ 00:13:06.752 END TEST 
accel_rpc 00:13:06.752 ************************************ 00:13:06.752 20:08:04 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:13:06.752 20:08:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:06.752 20:08:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:06.752 20:08:04 -- common/autotest_common.sh@10 -- # set +x 00:13:06.752 ************************************ 00:13:06.752 START TEST app_cmdline 00:13:06.753 ************************************ 00:13:06.753 20:08:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:13:06.753 * Looking for test storage... 00:13:06.753 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:13:06.753 20:08:04 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:06.753 20:08:04 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1419697 00:13:06.753 20:08:04 -- app/cmdline.sh@18 -- # waitforlisten 1419697 00:13:06.753 20:08:04 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:06.753 20:08:04 -- common/autotest_common.sh@819 -- # '[' -z 1419697 ']' 00:13:06.753 20:08:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.753 20:08:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:06.753 20:08:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.753 20:08:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:06.753 20:08:04 -- common/autotest_common.sh@10 -- # set +x 00:13:06.753 [2024-04-25 20:08:04.545842] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
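For reference, the app_cmdline run above boots spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods and then probes that whitelist from the shell. A minimal sketch of the same check, assuming the workspace layout used in this job (build/bin and scripts/ under the spdk checkout) and a plain sleep in place of the test's waitforlisten helper, is:

    # start the target with only two RPC methods allowed (same flags as the trace above)
    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    tgt_pid=$!
    sleep 2   # the real test waits on /var/tmp/spdk.sock via waitforlisten

    # whitelisted methods behave normally
    ./scripts/rpc.py spdk_get_version
    ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort

    # anything else is rejected with JSON-RPC error -32601 (Method not found)
    ./scripts/rpc.py env_dpdk_get_mem_stats || echo "rejected as expected"

    kill $tgt_pid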
00:13:06.753 [2024-04-25 20:08:04.545963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419697 ] 00:13:06.753 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.753 [2024-04-25 20:08:04.671684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.012 [2024-04-25 20:08:04.767655] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:07.012 [2024-04-25 20:08:04.767880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.581 20:08:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:07.581 20:08:05 -- common/autotest_common.sh@852 -- # return 0 00:13:07.581 20:08:05 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:13:07.581 { 00:13:07.581 "version": "SPDK v24.01.1-pre git sha1 36faa8c31", 00:13:07.581 "fields": { 00:13:07.581 "major": 24, 00:13:07.581 "minor": 1, 00:13:07.581 "patch": 1, 00:13:07.581 "suffix": "-pre", 00:13:07.581 "commit": "36faa8c31" 00:13:07.581 } 00:13:07.581 } 00:13:07.581 20:08:05 -- app/cmdline.sh@22 -- # expected_methods=() 00:13:07.581 20:08:05 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:07.581 20:08:05 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:07.581 20:08:05 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:07.581 20:08:05 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:07.581 20:08:05 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:07.581 20:08:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.581 20:08:05 -- common/autotest_common.sh@10 -- # set +x 00:13:07.581 20:08:05 -- app/cmdline.sh@26 -- # sort 00:13:07.581 20:08:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.581 20:08:05 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:07.581 20:08:05 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:07.581 20:08:05 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:07.581 20:08:05 -- common/autotest_common.sh@640 -- # local es=0 00:13:07.581 20:08:05 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:07.581 20:08:05 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:13:07.581 20:08:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:07.581 20:08:05 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:13:07.581 20:08:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:07.581 20:08:05 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:13:07.581 20:08:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:07.581 20:08:05 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:13:07.581 20:08:05 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:13:07.581 20:08:05 -- common/autotest_common.sh@643 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:07.839 request: 00:13:07.839 { 00:13:07.839 "method": "env_dpdk_get_mem_stats", 00:13:07.839 "req_id": 1 00:13:07.839 } 00:13:07.839 Got JSON-RPC error response 00:13:07.839 response: 00:13:07.839 { 00:13:07.839 "code": -32601, 00:13:07.839 "message": "Method not found" 00:13:07.839 } 00:13:07.839 20:08:05 -- common/autotest_common.sh@643 -- # es=1 00:13:07.839 20:08:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:07.839 20:08:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:07.839 20:08:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:07.839 20:08:05 -- app/cmdline.sh@1 -- # killprocess 1419697 00:13:07.839 20:08:05 -- common/autotest_common.sh@926 -- # '[' -z 1419697 ']' 00:13:07.839 20:08:05 -- common/autotest_common.sh@930 -- # kill -0 1419697 00:13:07.839 20:08:05 -- common/autotest_common.sh@931 -- # uname 00:13:07.839 20:08:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:07.839 20:08:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1419697 00:13:07.839 20:08:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:07.839 20:08:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:07.839 20:08:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1419697' 00:13:07.839 killing process with pid 1419697 00:13:07.839 20:08:05 -- common/autotest_common.sh@945 -- # kill 1419697 00:13:07.839 20:08:05 -- common/autotest_common.sh@950 -- # wait 1419697 00:13:08.779 00:13:08.779 real 0m2.074s 00:13:08.779 user 0m2.170s 00:13:08.779 sys 0m0.505s 00:13:08.779 20:08:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.779 20:08:06 -- common/autotest_common.sh@10 -- # set +x 00:13:08.779 ************************************ 00:13:08.779 END TEST app_cmdline 00:13:08.779 ************************************ 00:13:08.779 20:08:06 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:13:08.779 20:08:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:08.779 20:08:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:08.779 20:08:06 -- common/autotest_common.sh@10 -- # set +x 00:13:08.779 ************************************ 00:13:08.779 START TEST version 00:13:08.779 ************************************ 00:13:08.779 20:08:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:13:08.779 * Looking for test storage... 
00:13:08.779 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:13:08.779 20:08:06 -- app/version.sh@17 -- # get_header_version major 00:13:08.779 20:08:06 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:13:08.779 20:08:06 -- app/version.sh@14 -- # tr -d '"' 00:13:08.779 20:08:06 -- app/version.sh@14 -- # cut -f2 00:13:08.779 20:08:06 -- app/version.sh@17 -- # major=24 00:13:08.779 20:08:06 -- app/version.sh@18 -- # get_header_version minor 00:13:08.779 20:08:06 -- app/version.sh@14 -- # tr -d '"' 00:13:08.779 20:08:06 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:13:08.779 20:08:06 -- app/version.sh@14 -- # cut -f2 00:13:08.779 20:08:06 -- app/version.sh@18 -- # minor=1 00:13:08.779 20:08:06 -- app/version.sh@19 -- # get_header_version patch 00:13:08.779 20:08:06 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:13:08.779 20:08:06 -- app/version.sh@14 -- # tr -d '"' 00:13:08.779 20:08:06 -- app/version.sh@14 -- # cut -f2 00:13:08.779 20:08:06 -- app/version.sh@19 -- # patch=1 00:13:08.779 20:08:06 -- app/version.sh@20 -- # get_header_version suffix 00:13:08.779 20:08:06 -- app/version.sh@14 -- # tr -d '"' 00:13:08.779 20:08:06 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:13:08.779 20:08:06 -- app/version.sh@14 -- # cut -f2 00:13:08.779 20:08:06 -- app/version.sh@20 -- # suffix=-pre 00:13:08.779 20:08:06 -- app/version.sh@22 -- # version=24.1 00:13:08.779 20:08:06 -- app/version.sh@25 -- # (( patch != 0 )) 00:13:08.780 20:08:06 -- app/version.sh@25 -- # version=24.1.1 00:13:08.780 20:08:06 -- app/version.sh@28 -- # version=24.1.1rc0 00:13:08.780 20:08:06 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:13:08.780 20:08:06 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:08.780 20:08:06 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:13:08.780 20:08:06 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:13:08.780 00:13:08.780 real 0m0.112s 00:13:08.780 user 0m0.057s 00:13:08.780 sys 0m0.087s 00:13:08.780 20:08:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.780 20:08:06 -- common/autotest_common.sh@10 -- # set +x 00:13:08.780 ************************************ 00:13:08.780 END TEST version 00:13:08.780 ************************************ 00:13:08.780 20:08:06 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:13:08.780 20:08:06 -- spdk/autotest.sh@204 -- # uname -s 00:13:08.780 20:08:06 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:13:08.780 20:08:06 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:13:08.780 20:08:06 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:13:08.780 20:08:06 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:13:08.780 20:08:06 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:13:08.780 20:08:06 -- spdk/autotest.sh@268 -- # timing_exit lib 00:13:08.780 20:08:06 -- common/autotest_common.sh@718 -- # xtrace_disable 
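The version test above recovers each field from include/spdk/version.h with grep, cut and tr, then cross-checks the result against the python package built from the same tree. A condensed sketch of that parsing, run from the spdk checkout and assuming the tab-separated #define layout that cut -f2 relies on (the exact pipe order inside test/app/version.sh may differ slightly from this reconstruction), is:

    # extract each component the way the version test does
    hdr=include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')

    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch   # 24.1 becomes 24.1.1 in this run
    echo "header version: $version$suffix"        # the script maps -pre to rc0 before comparing

    # cross-check against the in-tree python bindings (24.1.1rc0 here)
    PYTHONPATH=./python python3 -c 'import spdk; print(spdk.__version__)'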
00:13:08.780 20:08:06 -- common/autotest_common.sh@10 -- # set +x 00:13:08.780 20:08:06 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:13:08.780 20:08:06 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:13:08.780 20:08:06 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:13:08.780 20:08:06 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:13:08.780 20:08:06 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:13:08.780 20:08:06 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:13:08.780 20:08:06 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:08.780 20:08:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:08.780 20:08:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:08.780 20:08:06 -- common/autotest_common.sh@10 -- # set +x 00:13:08.780 ************************************ 00:13:08.780 START TEST nvmf_tcp 00:13:08.780 ************************************ 00:13:08.780 20:08:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:09.040 * Looking for test storage... 00:13:09.040 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf 00:13:09.040 20:08:06 -- nvmf/nvmf.sh@10 -- # uname -s 00:13:09.040 20:08:06 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:13:09.040 20:08:06 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.040 20:08:06 -- nvmf/common.sh@7 -- # uname -s 00:13:09.040 20:08:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.040 20:08:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.040 20:08:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.040 20:08:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.040 20:08:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.040 20:08:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.040 20:08:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.040 20:08:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.040 20:08:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.040 20:08:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.040 20:08:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:09.040 20:08:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:09.040 20:08:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.040 20:08:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.040 20:08:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:09.040 20:08:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:09.040 20:08:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.040 20:08:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.040 20:08:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.040 20:08:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:09.040 20:08:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.040 20:08:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.040 20:08:06 -- paths/export.sh@5 -- # export PATH 00:13:09.040 20:08:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.040 20:08:06 -- nvmf/common.sh@46 -- # : 0 00:13:09.040 20:08:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:09.040 20:08:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:09.041 20:08:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:09.041 20:08:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.041 20:08:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.041 20:08:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:09.041 20:08:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:09.041 20:08:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:09.041 20:08:06 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:09.041 20:08:06 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:13:09.041 20:08:06 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:13:09.041 20:08:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:09.041 20:08:06 -- common/autotest_common.sh@10 -- # set +x 00:13:09.041 20:08:06 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:13:09.041 20:08:06 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:09.041 20:08:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:09.041 20:08:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:09.041 20:08:06 -- common/autotest_common.sh@10 -- # set +x 00:13:09.041 ************************************ 00:13:09.041 START TEST nvmf_example 00:13:09.041 ************************************ 00:13:09.041 20:08:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:09.041 * Looking for test storage... 
00:13:09.041 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:09.041 20:08:06 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.041 20:08:06 -- nvmf/common.sh@7 -- # uname -s 00:13:09.041 20:08:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.041 20:08:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.041 20:08:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.041 20:08:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.041 20:08:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.041 20:08:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.041 20:08:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.041 20:08:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.041 20:08:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.041 20:08:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.041 20:08:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:09.041 20:08:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:09.041 20:08:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.041 20:08:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.041 20:08:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:09.041 20:08:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:09.041 20:08:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.041 20:08:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.041 20:08:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.041 20:08:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.041 20:08:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.041 20:08:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.041 20:08:06 -- paths/export.sh@5 -- # export PATH 00:13:09.041 20:08:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.041 20:08:06 -- nvmf/common.sh@46 -- # : 0 00:13:09.041 20:08:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:09.041 20:08:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:09.041 20:08:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:09.041 20:08:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.041 20:08:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.041 20:08:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:09.041 20:08:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:09.041 20:08:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:09.041 20:08:06 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:09.041 20:08:06 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:09.041 20:08:06 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:09.041 20:08:06 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:09.041 20:08:06 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:09.041 20:08:06 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:09.041 20:08:06 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:09.041 20:08:06 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:09.041 20:08:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:09.041 20:08:06 -- common/autotest_common.sh@10 -- # set +x 00:13:09.041 20:08:06 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:09.041 20:08:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:09.041 20:08:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.041 20:08:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:09.041 20:08:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:09.041 20:08:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:09.041 20:08:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.041 20:08:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.041 20:08:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.041 20:08:06 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:13:09.041 20:08:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:09.041 20:08:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:09.041 20:08:06 -- 
common/autotest_common.sh@10 -- # set +x 00:13:15.616 20:08:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:15.616 20:08:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:15.616 20:08:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:15.616 20:08:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:15.616 20:08:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:15.616 20:08:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:15.616 20:08:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:15.616 20:08:12 -- nvmf/common.sh@294 -- # net_devs=() 00:13:15.616 20:08:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:15.616 20:08:12 -- nvmf/common.sh@295 -- # e810=() 00:13:15.616 20:08:12 -- nvmf/common.sh@295 -- # local -ga e810 00:13:15.616 20:08:12 -- nvmf/common.sh@296 -- # x722=() 00:13:15.616 20:08:12 -- nvmf/common.sh@296 -- # local -ga x722 00:13:15.616 20:08:12 -- nvmf/common.sh@297 -- # mlx=() 00:13:15.616 20:08:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:15.616 20:08:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.616 20:08:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.616 20:08:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.616 20:08:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.616 20:08:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.616 20:08:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.616 20:08:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.616 20:08:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.616 20:08:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.616 20:08:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.616 20:08:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.616 20:08:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:15.616 20:08:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:15.616 20:08:12 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:13:15.616 20:08:12 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:13:15.616 20:08:12 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:13:15.616 20:08:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:15.616 20:08:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:15.616 20:08:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:15.616 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:15.616 20:08:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:15.616 20:08:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:15.616 20:08:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.616 20:08:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.616 20:08:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:15.616 20:08:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:15.616 20:08:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:15.616 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:15.616 20:08:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:15.616 20:08:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:15.616 20:08:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.616 20:08:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.616 
20:08:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:15.616 20:08:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:15.616 20:08:12 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:13:15.616 20:08:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:15.616 20:08:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.616 20:08:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:15.616 20:08:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.616 20:08:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:15.616 Found net devices under 0000:27:00.0: cvl_0_0 00:13:15.616 20:08:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.616 20:08:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:15.616 20:08:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.616 20:08:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:15.616 20:08:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.616 20:08:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:15.616 Found net devices under 0000:27:00.1: cvl_0_1 00:13:15.616 20:08:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.616 20:08:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:15.616 20:08:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:15.616 20:08:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:15.616 20:08:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:15.616 20:08:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:15.616 20:08:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.616 20:08:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.616 20:08:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.616 20:08:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:15.616 20:08:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.616 20:08:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.616 20:08:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:15.616 20:08:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.616 20:08:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.616 20:08:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:15.616 20:08:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:15.616 20:08:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.616 20:08:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.616 20:08:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.616 20:08:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.616 20:08:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:15.616 20:08:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.616 20:08:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.616 20:08:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.616 20:08:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:15.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:15.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:13:15.616 00:13:15.616 --- 10.0.0.2 ping statistics --- 00:13:15.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.616 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:13:15.616 20:08:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:15.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.398 ms 00:13:15.616 00:13:15.616 --- 10.0.0.1 ping statistics --- 00:13:15.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.616 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:13:15.616 20:08:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.616 20:08:12 -- nvmf/common.sh@410 -- # return 0 00:13:15.616 20:08:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:15.616 20:08:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.616 20:08:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:15.616 20:08:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:15.616 20:08:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.616 20:08:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:15.617 20:08:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:15.617 20:08:12 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:15.617 20:08:12 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:15.617 20:08:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:15.617 20:08:12 -- common/autotest_common.sh@10 -- # set +x 00:13:15.617 20:08:12 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:15.617 20:08:12 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:15.617 20:08:12 -- target/nvmf_example.sh@34 -- # nvmfpid=1423971 00:13:15.617 20:08:12 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:15.617 20:08:12 -- target/nvmf_example.sh@36 -- # waitforlisten 1423971 00:13:15.617 20:08:12 -- common/autotest_common.sh@819 -- # '[' -z 1423971 ']' 00:13:15.617 20:08:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.617 20:08:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:15.617 20:08:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
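The two pings above are the sanity check for the plumbing that nvmf/common.sh builds for NET_TYPE=phy-fallback: the first detected NIC is moved into a private network namespace to act as the target side, the second stays in the root namespace as the initiator, and NVMe/TCP traffic on port 4420 is allowed through. Condensed from the trace, with cvl_0_0/cvl_0_1 standing in for whatever interfaces the host exposes:

    ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator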
00:13:15.617 20:08:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:15.617 20:08:12 -- common/autotest_common.sh@10 -- # set +x 00:13:15.617 20:08:12 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:15.617 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.875 20:08:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:15.875 20:08:13 -- common/autotest_common.sh@852 -- # return 0 00:13:15.875 20:08:13 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:15.875 20:08:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:15.875 20:08:13 -- common/autotest_common.sh@10 -- # set +x 00:13:15.875 20:08:13 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:15.875 20:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.875 20:08:13 -- common/autotest_common.sh@10 -- # set +x 00:13:15.875 20:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.875 20:08:13 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:15.875 20:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.875 20:08:13 -- common/autotest_common.sh@10 -- # set +x 00:13:16.134 20:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.134 20:08:13 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:16.134 20:08:13 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:16.134 20:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.134 20:08:13 -- common/autotest_common.sh@10 -- # set +x 00:13:16.134 20:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.134 20:08:13 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:16.134 20:08:13 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:16.134 20:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.134 20:08:13 -- common/autotest_common.sh@10 -- # set +x 00:13:16.134 20:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.134 20:08:13 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.134 20:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.134 20:08:13 -- common/autotest_common.sh@10 -- # set +x 00:13:16.134 20:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.134 20:08:13 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:16.134 20:08:13 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:16.134 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.122 Initializing NVMe Controllers 00:13:26.122 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:26.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:26.122 Initialization complete. Launching workers. 
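Before the numbers below, the example target is assembled over RPC (a 64 MiB, 512-byte-block malloc bdev exported as namespace 1 of nqn.2016-06.io.spdk:cnode1 on a TCP listener at 10.0.0.2:4420) and then driven by spdk_nvme_perf for 10 seconds at queue depth 64 with 4 KiB random I/O at roughly a 30 percent read mix. A sketch of the same sequence using scripts/rpc.py directly, which is what the rpc_cmd wrapper in the trace resolves to:

    # build the target: transport, backing bdev, subsystem, namespace, listener
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # exercise it: qd 64, 4 KiB I/O, random read/write, 30% reads, 10 seconds
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'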
00:13:26.122 ======================================================== 00:13:26.122 Latency(us) 00:13:26.122 Device Information : IOPS MiB/s Average min max 00:13:26.122 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18339.74 71.64 3489.30 697.43 15886.36 00:13:26.122 ======================================================== 00:13:26.122 Total : 18339.74 71.64 3489.30 697.43 15886.36 00:13:26.122 00:13:26.382 20:08:24 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:26.382 20:08:24 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:26.382 20:08:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:26.383 20:08:24 -- nvmf/common.sh@116 -- # sync 00:13:26.383 20:08:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:26.383 20:08:24 -- nvmf/common.sh@119 -- # set +e 00:13:26.383 20:08:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:26.383 20:08:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:26.383 rmmod nvme_tcp 00:13:26.383 rmmod nvme_fabrics 00:13:26.383 rmmod nvme_keyring 00:13:26.383 20:08:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:26.383 20:08:24 -- nvmf/common.sh@123 -- # set -e 00:13:26.383 20:08:24 -- nvmf/common.sh@124 -- # return 0 00:13:26.383 20:08:24 -- nvmf/common.sh@477 -- # '[' -n 1423971 ']' 00:13:26.383 20:08:24 -- nvmf/common.sh@478 -- # killprocess 1423971 00:13:26.383 20:08:24 -- common/autotest_common.sh@926 -- # '[' -z 1423971 ']' 00:13:26.383 20:08:24 -- common/autotest_common.sh@930 -- # kill -0 1423971 00:13:26.383 20:08:24 -- common/autotest_common.sh@931 -- # uname 00:13:26.383 20:08:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:26.383 20:08:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1423971 00:13:26.383 20:08:24 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:13:26.383 20:08:24 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:13:26.383 20:08:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1423971' 00:13:26.383 killing process with pid 1423971 00:13:26.383 20:08:24 -- common/autotest_common.sh@945 -- # kill 1423971 00:13:26.383 20:08:24 -- common/autotest_common.sh@950 -- # wait 1423971 00:13:26.949 nvmf threads initialize successfully 00:13:26.949 bdev subsystem init successfully 00:13:26.949 created a nvmf target service 00:13:26.949 create targets's poll groups done 00:13:26.949 all subsystems of target started 00:13:26.949 nvmf target is running 00:13:26.949 all subsystems of target stopped 00:13:26.949 destroy targets's poll groups done 00:13:26.949 destroyed the nvmf target service 00:13:26.949 bdev subsystem finish successfully 00:13:26.949 nvmf threads destroy successfully 00:13:26.949 20:08:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:26.949 20:08:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:26.949 20:08:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:26.949 20:08:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:26.949 20:08:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:26.949 20:08:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.949 20:08:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.949 20:08:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.860 20:08:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:28.860 20:08:26 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:28.860 20:08:26 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:13:28.860 20:08:26 -- common/autotest_common.sh@10 -- # set +x 00:13:28.860 00:13:28.860 real 0m19.931s 00:13:28.860 user 0m46.342s 00:13:28.860 sys 0m5.570s 00:13:28.860 20:08:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.860 20:08:26 -- common/autotest_common.sh@10 -- # set +x 00:13:28.860 ************************************ 00:13:28.860 END TEST nvmf_example 00:13:28.860 ************************************ 00:13:28.860 20:08:26 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:28.860 20:08:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:28.860 20:08:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:28.861 20:08:26 -- common/autotest_common.sh@10 -- # set +x 00:13:28.861 ************************************ 00:13:28.861 START TEST nvmf_filesystem 00:13:28.861 ************************************ 00:13:28.861 20:08:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:29.123 * Looking for test storage... 00:13:29.123 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:29.123 20:08:26 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh 00:13:29.123 20:08:26 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:29.123 20:08:26 -- common/autotest_common.sh@34 -- # set -e 00:13:29.123 20:08:26 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:29.123 20:08:26 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:29.123 20:08:26 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:29.123 20:08:26 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh 00:13:29.123 20:08:26 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:29.123 20:08:26 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:13:29.123 20:08:26 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:29.123 20:08:26 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:29.123 20:08:26 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:29.123 20:08:26 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:29.123 20:08:26 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:29.123 20:08:26 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:29.123 20:08:26 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:29.123 20:08:26 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:29.123 20:08:26 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:29.123 20:08:26 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:29.123 20:08:26 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:29.123 20:08:26 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:29.123 20:08:26 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:29.123 20:08:26 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:29.123 20:08:26 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:29.123 20:08:26 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:29.123 20:08:26 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:13:29.123 20:08:26 -- common/build_config.sh@20 -- # CONFIG_LTO=n 
00:13:29.123 20:08:26 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:29.123 20:08:26 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:29.123 20:08:26 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:29.123 20:08:26 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:29.123 20:08:26 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:29.123 20:08:26 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:29.123 20:08:26 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:29.123 20:08:26 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:13:29.123 20:08:26 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:13:29.123 20:08:26 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:13:29.123 20:08:26 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:13:29.123 20:08:26 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:13:29.123 20:08:26 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:13:29.123 20:08:26 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:13:29.123 20:08:26 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:13:29.123 20:08:26 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:13:29.123 20:08:26 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:13:29.123 20:08:26 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:13:29.123 20:08:26 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:13:29.123 20:08:26 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:13:29.123 20:08:26 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:13:29.124 20:08:26 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:13:29.124 20:08:26 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:13:29.124 20:08:26 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:29.124 20:08:26 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:13:29.124 20:08:26 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:13:29.124 20:08:26 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:13:29.124 20:08:26 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:29.124 20:08:26 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:13:29.124 20:08:26 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:13:29.124 20:08:26 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:13:29.124 20:08:26 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:13:29.124 20:08:26 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:13:29.124 20:08:26 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:13:29.124 20:08:26 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:13:29.124 20:08:26 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:13:29.124 20:08:26 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:13:29.124 20:08:26 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:13:29.124 20:08:26 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:13:29.124 20:08:26 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:13:29.124 20:08:26 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:13:29.124 20:08:26 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:13:29.124 20:08:26 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:13:29.124 20:08:26 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:13:29.124 20:08:26 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:13:29.124 20:08:26 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:29.124 20:08:26 -- common/build_config.sh@67 
-- # CONFIG_FC=n 00:13:29.124 20:08:26 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:13:29.124 20:08:26 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:13:29.124 20:08:26 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:13:29.124 20:08:26 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:13:29.124 20:08:26 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:13:29.124 20:08:26 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:13:29.124 20:08:26 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:13:29.124 20:08:26 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:13:29.124 20:08:26 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:13:29.124 20:08:26 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:29.124 20:08:26 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:13:29.124 20:08:26 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:13:29.124 20:08:26 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:13:29.124 20:08:26 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:13:29.124 20:08:26 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:13:29.124 20:08:26 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:13:29.124 20:08:26 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:13:29.124 20:08:26 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:13:29.124 20:08:26 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:13:29.124 20:08:26 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:13:29.124 20:08:26 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:29.124 20:08:26 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:29.124 20:08:26 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:29.124 20:08:26 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:29.124 20:08:26 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:29.124 20:08:26 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:29.124 20:08:26 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/config.h ]] 00:13:29.124 20:08:26 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:29.124 #define SPDK_CONFIG_H 00:13:29.124 #define SPDK_CONFIG_APPS 1 00:13:29.124 #define SPDK_CONFIG_ARCH native 00:13:29.124 #define SPDK_CONFIG_ASAN 1 00:13:29.124 #undef SPDK_CONFIG_AVAHI 00:13:29.124 #undef SPDK_CONFIG_CET 00:13:29.124 #define SPDK_CONFIG_COVERAGE 1 00:13:29.124 #define SPDK_CONFIG_CROSS_PREFIX 00:13:29.124 #undef SPDK_CONFIG_CRYPTO 00:13:29.124 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:29.124 #undef SPDK_CONFIG_CUSTOMOCF 00:13:29.124 #undef SPDK_CONFIG_DAOS 00:13:29.124 #define SPDK_CONFIG_DAOS_DIR 00:13:29.124 #define SPDK_CONFIG_DEBUG 1 00:13:29.124 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:29.124 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:13:29.124 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:29.124 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:29.124 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:29.124 #define 
SPDK_CONFIG_ENV /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:13:29.124 #define SPDK_CONFIG_EXAMPLES 1 00:13:29.124 #undef SPDK_CONFIG_FC 00:13:29.124 #define SPDK_CONFIG_FC_PATH 00:13:29.124 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:29.124 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:29.124 #undef SPDK_CONFIG_FUSE 00:13:29.124 #undef SPDK_CONFIG_FUZZER 00:13:29.124 #define SPDK_CONFIG_FUZZER_LIB 00:13:29.124 #undef SPDK_CONFIG_GOLANG 00:13:29.124 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:29.124 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:29.124 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:29.124 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:29.124 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:29.124 #define SPDK_CONFIG_IDXD 1 00:13:29.124 #undef SPDK_CONFIG_IDXD_KERNEL 00:13:29.124 #undef SPDK_CONFIG_IPSEC_MB 00:13:29.124 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:29.124 #define SPDK_CONFIG_ISAL 1 00:13:29.124 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:29.124 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:29.124 #define SPDK_CONFIG_LIBDIR 00:13:29.124 #undef SPDK_CONFIG_LTO 00:13:29.124 #define SPDK_CONFIG_MAX_LCORES 00:13:29.124 #define SPDK_CONFIG_NVME_CUSE 1 00:13:29.124 #undef SPDK_CONFIG_OCF 00:13:29.124 #define SPDK_CONFIG_OCF_PATH 00:13:29.124 #define SPDK_CONFIG_OPENSSL_PATH 00:13:29.124 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:29.124 #undef SPDK_CONFIG_PGO_USE 00:13:29.124 #define SPDK_CONFIG_PREFIX /usr/local 00:13:29.124 #undef SPDK_CONFIG_RAID5F 00:13:29.124 #undef SPDK_CONFIG_RBD 00:13:29.124 #define SPDK_CONFIG_RDMA 1 00:13:29.124 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:29.124 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:29.124 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:29.124 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:29.124 #define SPDK_CONFIG_SHARED 1 00:13:29.124 #undef SPDK_CONFIG_SMA 00:13:29.124 #define SPDK_CONFIG_TESTS 1 00:13:29.124 #undef SPDK_CONFIG_TSAN 00:13:29.124 #define SPDK_CONFIG_UBLK 1 00:13:29.124 #define SPDK_CONFIG_UBSAN 1 00:13:29.124 #undef SPDK_CONFIG_UNIT_TESTS 00:13:29.124 #undef SPDK_CONFIG_URING 00:13:29.124 #define SPDK_CONFIG_URING_PATH 00:13:29.124 #undef SPDK_CONFIG_URING_ZNS 00:13:29.124 #undef SPDK_CONFIG_USDT 00:13:29.124 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:29.124 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:29.124 #undef SPDK_CONFIG_VFIO_USER 00:13:29.124 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:29.124 #define SPDK_CONFIG_VHOST 1 00:13:29.124 #define SPDK_CONFIG_VIRTIO 1 00:13:29.124 #undef SPDK_CONFIG_VTUNE 00:13:29.124 #define SPDK_CONFIG_VTUNE_DIR 00:13:29.124 #define SPDK_CONFIG_WERROR 1 00:13:29.124 #define SPDK_CONFIG_WPDK_DIR 00:13:29.124 #undef SPDK_CONFIG_XNVME 00:13:29.124 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:29.124 20:08:26 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:29.124 20:08:26 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:29.124 20:08:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.124 20:08:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.124 20:08:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.124 20:08:26 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.125 20:08:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.125 20:08:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.125 20:08:26 -- paths/export.sh@5 -- # export PATH 00:13:29.125 20:08:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.125 20:08:26 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:13:29.125 20:08:26 -- pm/common@6 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:13:29.125 20:08:26 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:13:29.125 20:08:26 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:13:29.125 20:08:26 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:29.125 20:08:26 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:13:29.125 20:08:26 -- pm/common@16 -- # TEST_TAG=N/A 00:13:29.125 20:08:26 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/dsa-phy-autotest/spdk/.run_test_name 00:13:29.125 20:08:26 -- common/autotest_common.sh@52 -- # : 1 00:13:29.125 20:08:26 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:13:29.125 20:08:26 -- common/autotest_common.sh@56 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:29.125 20:08:26 -- common/autotest_common.sh@58 -- # : 0 00:13:29.125 
20:08:26 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:13:29.125 20:08:26 -- common/autotest_common.sh@60 -- # : 1 00:13:29.125 20:08:26 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:29.125 20:08:26 -- common/autotest_common.sh@62 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:13:29.125 20:08:26 -- common/autotest_common.sh@64 -- # : 00:13:29.125 20:08:26 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:13:29.125 20:08:26 -- common/autotest_common.sh@66 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:13:29.125 20:08:26 -- common/autotest_common.sh@68 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:13:29.125 20:08:26 -- common/autotest_common.sh@70 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:13:29.125 20:08:26 -- common/autotest_common.sh@72 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:29.125 20:08:26 -- common/autotest_common.sh@74 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:13:29.125 20:08:26 -- common/autotest_common.sh@76 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:13:29.125 20:08:26 -- common/autotest_common.sh@78 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:13:29.125 20:08:26 -- common/autotest_common.sh@80 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:13:29.125 20:08:26 -- common/autotest_common.sh@82 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:13:29.125 20:08:26 -- common/autotest_common.sh@84 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:13:29.125 20:08:26 -- common/autotest_common.sh@86 -- # : 1 00:13:29.125 20:08:26 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:13:29.125 20:08:26 -- common/autotest_common.sh@88 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:13:29.125 20:08:26 -- common/autotest_common.sh@90 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:29.125 20:08:26 -- common/autotest_common.sh@92 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:13:29.125 20:08:26 -- common/autotest_common.sh@94 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:13:29.125 20:08:26 -- common/autotest_common.sh@96 -- # : tcp 00:13:29.125 20:08:26 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:29.125 20:08:26 -- common/autotest_common.sh@98 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:13:29.125 20:08:26 -- common/autotest_common.sh@100 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:13:29.125 20:08:26 -- common/autotest_common.sh@102 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:13:29.125 20:08:26 -- common/autotest_common.sh@104 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:13:29.125 20:08:26 -- common/autotest_common.sh@106 -- # : 0 
00:13:29.125 20:08:26 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:13:29.125 20:08:26 -- common/autotest_common.sh@108 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:13:29.125 20:08:26 -- common/autotest_common.sh@110 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:13:29.125 20:08:26 -- common/autotest_common.sh@112 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:29.125 20:08:26 -- common/autotest_common.sh@114 -- # : 1 00:13:29.125 20:08:26 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:13:29.125 20:08:26 -- common/autotest_common.sh@116 -- # : 1 00:13:29.125 20:08:26 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:13:29.125 20:08:26 -- common/autotest_common.sh@118 -- # : 00:13:29.125 20:08:26 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:29.125 20:08:26 -- common/autotest_common.sh@120 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:13:29.125 20:08:26 -- common/autotest_common.sh@122 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:13:29.125 20:08:26 -- common/autotest_common.sh@124 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:13:29.125 20:08:26 -- common/autotest_common.sh@126 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:13:29.125 20:08:26 -- common/autotest_common.sh@128 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:13:29.125 20:08:26 -- common/autotest_common.sh@130 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:13:29.125 20:08:26 -- common/autotest_common.sh@132 -- # : 00:13:29.125 20:08:26 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:13:29.125 20:08:26 -- common/autotest_common.sh@134 -- # : true 00:13:29.125 20:08:26 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:13:29.125 20:08:26 -- common/autotest_common.sh@136 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:13:29.125 20:08:26 -- common/autotest_common.sh@138 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:13:29.125 20:08:26 -- common/autotest_common.sh@140 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:13:29.125 20:08:26 -- common/autotest_common.sh@142 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:13:29.125 20:08:26 -- common/autotest_common.sh@144 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:13:29.125 20:08:26 -- common/autotest_common.sh@146 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:13:29.125 20:08:26 -- common/autotest_common.sh@148 -- # : 00:13:29.125 20:08:26 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:13:29.125 20:08:26 -- common/autotest_common.sh@150 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:13:29.125 20:08:26 -- common/autotest_common.sh@152 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:13:29.125 20:08:26 -- common/autotest_common.sh@154 -- # 
: 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:13:29.125 20:08:26 -- common/autotest_common.sh@156 -- # : 1 00:13:29.125 20:08:26 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:13:29.125 20:08:26 -- common/autotest_common.sh@158 -- # : 1 00:13:29.125 20:08:26 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:13:29.125 20:08:26 -- common/autotest_common.sh@160 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:13:29.125 20:08:26 -- common/autotest_common.sh@163 -- # : 00:13:29.125 20:08:26 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:13:29.125 20:08:26 -- common/autotest_common.sh@165 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:13:29.125 20:08:26 -- common/autotest_common.sh@167 -- # : 0 00:13:29.125 20:08:26 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:29.125 20:08:26 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:13:29.125 20:08:26 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:13:29.126 20:08:26 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:13:29.126 20:08:26 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:13:29.126 20:08:26 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:29.126 20:08:26 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:29.126 20:08:26 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:29.126 20:08:26 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 
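The PATH and LD_LIBRARY_PATH values above keep growing because paths/export.sh prepends the same directories every time it is re-sourced by a nested script. A minimal sketch of an idempotent prepend, using a hypothetical helper name (prepend_path) that is not part of the SPDK scripts and is shown only to illustrate the de-duplication idea:
  # hypothetical helper, not in paths/export.sh; prepends a directory only once
  prepend_path() {
      case ":$PATH:" in
          *":$1:"*) ;;             # directory already present, leave PATH unchanged
          *) PATH="$1:$PATH" ;;    # otherwise prepend it
      esac
  }
  prepend_path /opt/go/1.21.1/bin
  prepend_path /opt/protoc/21.7/bin
  prepend_path /opt/golangci/1.54.2/bin
  export PATH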
00:13:29.126 20:08:26 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:29.126 20:08:26 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:29.126 20:08:26 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:13:29.126 20:08:26 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:13:29.126 20:08:26 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:29.126 20:08:26 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:13:29.126 20:08:26 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:29.126 20:08:26 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:29.126 20:08:26 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:29.126 20:08:26 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:29.126 20:08:26 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:29.126 20:08:26 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:13:29.126 20:08:26 -- common/autotest_common.sh@196 -- # cat 00:13:29.126 20:08:26 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:13:29.126 20:08:26 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:29.126 20:08:26 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:29.126 20:08:26 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:29.126 20:08:26 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:29.126 20:08:26 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:13:29.126 20:08:26 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:13:29.126 20:08:26 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:13:29.126 20:08:26 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:13:29.126 20:08:26 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:13:29.126 20:08:26 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:13:29.126 20:08:26 -- common/autotest_common.sh@239 -- # export 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:29.126 20:08:26 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:29.126 20:08:26 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:29.126 20:08:26 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:29.126 20:08:26 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:29.126 20:08:26 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:29.126 20:08:26 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:29.126 20:08:26 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:29.126 20:08:26 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:13:29.126 20:08:26 -- common/autotest_common.sh@249 -- # export valgrind= 00:13:29.126 20:08:26 -- common/autotest_common.sh@249 -- # valgrind= 00:13:29.126 20:08:26 -- common/autotest_common.sh@255 -- # uname -s 00:13:29.126 20:08:26 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:13:29.126 20:08:26 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:13:29.126 20:08:26 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:13:29.126 20:08:26 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:13:29.126 20:08:26 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:13:29.126 20:08:26 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:13:29.126 20:08:26 -- common/autotest_common.sh@265 -- # MAKE=make 00:13:29.126 20:08:26 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j128 00:13:29.126 20:08:26 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:13:29.126 20:08:26 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:13:29.126 20:08:26 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/dsa-phy-autotest/spdk/../output ']' 00:13:29.126 20:08:26 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:13:29.126 20:08:26 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:13:29.126 20:08:26 -- common/autotest_common.sh@291 -- # for i in "$@" 00:13:29.126 20:08:26 -- common/autotest_common.sh@292 -- # case "$i" in 00:13:29.126 20:08:26 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:13:29.126 20:08:26 -- common/autotest_common.sh@309 -- # [[ -z 1426774 ]] 00:13:29.126 20:08:26 -- common/autotest_common.sh@309 -- # kill -0 1426774 00:13:29.126 20:08:26 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:13:29.126 20:08:26 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:13:29.126 20:08:26 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:13:29.126 20:08:26 -- common/autotest_common.sh@322 -- # local mount target_dir 00:13:29.126 20:08:26 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:13:29.126 20:08:26 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:13:29.126 20:08:26 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:13:29.126 20:08:26 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:13:29.126 20:08:26 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.e4EmTJ 00:13:29.126 20:08:26 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" 
"$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:29.126 20:08:26 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:13:29.126 20:08:26 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:13:29.126 20:08:26 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target /tmp/spdk.e4EmTJ/tests/target /tmp/spdk.e4EmTJ 00:13:29.126 20:08:26 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:13:29.126 20:08:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:29.126 20:08:26 -- common/autotest_common.sh@318 -- # df -T 00:13:29.126 20:08:26 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:13:29.126 20:08:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:13:29.126 20:08:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:13:29.126 20:08:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:13:29.126 20:08:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:13:29.126 20:08:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:13:29.126 20:08:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:29.126 20:08:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:13:29.126 20:08:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:13:29.126 20:08:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=991178752 00:13:29.126 20:08:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:13:29.126 20:08:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=4293251072 00:13:29.126 20:08:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:29.126 20:08:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:13:29.126 20:08:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:13:29.126 20:08:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=123024781312 00:13:29.126 20:08:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=129472466944 00:13:29.126 20:08:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=6447685632 00:13:29.126 20:08:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:29.126 20:08:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:13:29.126 20:08:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:13:29.126 20:08:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=64733638656 00:13:29.126 20:08:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=64736231424 00:13:29.126 20:08:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:13:29.126 20:08:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:29.126 20:08:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:13:29.127 20:08:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:13:29.127 20:08:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=25884811264 00:13:29.127 20:08:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=25894494208 00:13:29.127 20:08:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=9682944 00:13:29.127 20:08:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:29.127 20:08:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=efivarfs 00:13:29.127 20:08:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=efivarfs 00:13:29.127 20:08:26 -- 
common/autotest_common.sh@353 -- # avails["$mount"]=66560 00:13:29.127 20:08:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=507904 00:13:29.127 20:08:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=437248 00:13:29.127 20:08:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:29.127 20:08:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:13:29.127 20:08:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:13:29.127 20:08:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=64735371264 00:13:29.127 20:08:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=64736235520 00:13:29.127 20:08:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=864256 00:13:29.127 20:08:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:29.127 20:08:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:13:29.127 20:08:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:13:29.127 20:08:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=12947238912 00:13:29.127 20:08:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12947243008 00:13:29.127 20:08:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:13:29.127 20:08:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:29.127 20:08:26 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:13:29.127 * Looking for test storage... 00:13:29.127 20:08:26 -- common/autotest_common.sh@359 -- # local target_space new_size 00:13:29.127 20:08:26 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:13:29.127 20:08:26 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:29.127 20:08:26 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:29.127 20:08:26 -- common/autotest_common.sh@363 -- # mount=/ 00:13:29.127 20:08:26 -- common/autotest_common.sh@365 -- # target_space=123024781312 00:13:29.127 20:08:26 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:13:29.127 20:08:26 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:13:29.127 20:08:26 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:13:29.127 20:08:26 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:13:29.127 20:08:26 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:13:29.127 20:08:26 -- common/autotest_common.sh@372 -- # new_size=8662278144 00:13:29.127 20:08:26 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:29.127 20:08:26 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:29.127 20:08:26 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:29.127 20:08:26 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:29.127 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:29.127 20:08:26 -- common/autotest_common.sh@380 -- # return 0 00:13:29.127 20:08:26 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:13:29.127 20:08:26 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:13:29.127 20:08:26 -- common/autotest_common.sh@1669 -- # trap 'trap - 
ERR; print_backtrace >&2' ERR 00:13:29.127 20:08:26 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:29.127 20:08:26 -- common/autotest_common.sh@1672 -- # true 00:13:29.127 20:08:26 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:13:29.127 20:08:26 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:13:29.127 20:08:26 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:13:29.127 20:08:26 -- common/autotest_common.sh@27 -- # exec 00:13:29.127 20:08:26 -- common/autotest_common.sh@29 -- # exec 00:13:29.127 20:08:26 -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:29.127 20:08:26 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:29.127 20:08:26 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:29.127 20:08:26 -- common/autotest_common.sh@18 -- # set -x 00:13:29.127 20:08:26 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.127 20:08:26 -- nvmf/common.sh@7 -- # uname -s 00:13:29.127 20:08:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.127 20:08:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.127 20:08:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.127 20:08:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.127 20:08:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.127 20:08:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.127 20:08:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.127 20:08:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.127 20:08:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.127 20:08:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.127 20:08:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:29.127 20:08:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:29.127 20:08:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.127 20:08:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.127 20:08:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:29.127 20:08:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:29.127 20:08:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.127 20:08:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.127 20:08:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.127 20:08:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.127 20:08:26 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.127 20:08:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.127 20:08:26 -- paths/export.sh@5 -- # export PATH 00:13:29.127 20:08:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.127 20:08:26 -- nvmf/common.sh@46 -- # : 0 00:13:29.127 20:08:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:29.127 20:08:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:29.127 20:08:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:29.127 20:08:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.127 20:08:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.127 20:08:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:29.127 20:08:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:29.127 20:08:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:29.127 20:08:26 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:29.127 20:08:26 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:29.127 20:08:26 -- target/filesystem.sh@15 -- # nvmftestinit 00:13:29.127 20:08:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:29.127 20:08:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.127 20:08:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:29.127 20:08:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:29.127 20:08:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:29.127 20:08:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.127 20:08:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.127 20:08:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.127 20:08:26 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:13:29.127 20:08:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 
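gather_supported_nvmf_pci_devs resolves kernel net interfaces from PCI addresses through sysfs; a condensed sketch of the lookup that produces the "Found net devices under ..." lines below, with the two Intel 0x159b ports from this run filled in (the loop body mirrors nvmf/common.sh, trimmed for readability):
  # condensed sketch; PCI addresses are the two NIC ports enumerated on this rig
  for pci in 0000:27:00.0 0000:27:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # sysfs entries for the netdevs on this function
      pci_net_devs=("${pci_net_devs[@]##*/}")              # strip the path, keep interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done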
00:13:29.127 20:08:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:29.127 20:08:26 -- common/autotest_common.sh@10 -- # set +x 00:13:34.406 20:08:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:34.406 20:08:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:34.406 20:08:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:34.406 20:08:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:34.406 20:08:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:34.406 20:08:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:34.406 20:08:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:34.406 20:08:32 -- nvmf/common.sh@294 -- # net_devs=() 00:13:34.406 20:08:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:34.406 20:08:32 -- nvmf/common.sh@295 -- # e810=() 00:13:34.406 20:08:32 -- nvmf/common.sh@295 -- # local -ga e810 00:13:34.406 20:08:32 -- nvmf/common.sh@296 -- # x722=() 00:13:34.406 20:08:32 -- nvmf/common.sh@296 -- # local -ga x722 00:13:34.406 20:08:32 -- nvmf/common.sh@297 -- # mlx=() 00:13:34.406 20:08:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:34.406 20:08:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.406 20:08:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.406 20:08:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.406 20:08:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.406 20:08:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.406 20:08:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.406 20:08:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.406 20:08:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.406 20:08:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.406 20:08:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.406 20:08:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.406 20:08:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:34.406 20:08:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:34.406 20:08:32 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:13:34.406 20:08:32 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:13:34.406 20:08:32 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:13:34.406 20:08:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:34.406 20:08:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:34.406 20:08:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:34.406 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:34.406 20:08:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:34.406 20:08:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:34.406 20:08:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.406 20:08:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.406 20:08:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:34.406 20:08:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:34.406 20:08:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:34.406 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:34.406 20:08:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:34.406 20:08:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:34.406 20:08:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:13:34.406 20:08:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.406 20:08:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:34.406 20:08:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:34.406 20:08:32 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:13:34.406 20:08:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:34.406 20:08:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.406 20:08:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:34.406 20:08:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.406 20:08:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:34.406 Found net devices under 0000:27:00.0: cvl_0_0 00:13:34.406 20:08:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.406 20:08:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:34.406 20:08:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.406 20:08:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:34.406 20:08:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.406 20:08:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:34.406 Found net devices under 0000:27:00.1: cvl_0_1 00:13:34.406 20:08:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.406 20:08:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:34.406 20:08:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:34.406 20:08:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:34.406 20:08:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:34.406 20:08:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:34.406 20:08:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.406 20:08:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.406 20:08:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.406 20:08:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:34.406 20:08:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.406 20:08:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:34.406 20:08:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:34.406 20:08:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.406 20:08:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.406 20:08:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:34.406 20:08:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:34.406 20:08:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.406 20:08:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.663 20:08:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.663 20:08:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.663 20:08:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:34.663 20:08:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:34.663 20:08:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.663 20:08:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.663 20:08:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:34.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:34.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:13:34.663 00:13:34.663 --- 10.0.0.2 ping statistics --- 00:13:34.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.663 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:13:34.664 20:08:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:34.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:13:34.664 00:13:34.664 --- 10.0.0.1 ping statistics --- 00:13:34.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.664 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:13:34.664 20:08:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.664 20:08:32 -- nvmf/common.sh@410 -- # return 0 00:13:34.664 20:08:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:34.664 20:08:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.664 20:08:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:34.664 20:08:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:34.664 20:08:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.664 20:08:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:34.664 20:08:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:34.664 20:08:32 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:34.664 20:08:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:34.664 20:08:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:34.664 20:08:32 -- common/autotest_common.sh@10 -- # set +x 00:13:34.664 ************************************ 00:13:34.664 START TEST nvmf_filesystem_no_in_capsule 00:13:34.664 ************************************ 00:13:34.664 20:08:32 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:13:34.664 20:08:32 -- target/filesystem.sh@47 -- # in_capsule=0 00:13:34.664 20:08:32 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:34.664 20:08:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:34.664 20:08:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:34.664 20:08:32 -- common/autotest_common.sh@10 -- # set +x 00:13:34.664 20:08:32 -- nvmf/common.sh@469 -- # nvmfpid=1430318 00:13:34.664 20:08:32 -- nvmf/common.sh@470 -- # waitforlisten 1430318 00:13:34.664 20:08:32 -- common/autotest_common.sh@819 -- # '[' -z 1430318 ']' 00:13:34.664 20:08:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.664 20:08:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:34.664 20:08:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.664 20:08:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:34.664 20:08:32 -- common/autotest_common.sh@10 -- # set +x 00:13:34.664 20:08:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:34.664 [2024-04-25 20:08:32.579702] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
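The nvmf_tcp_init plumbing traced above reduces to a short iproute2/iptables sequence; condensed here for readability, with the interface names and addresses exactly as they appear in this run (cvl_0_0 is moved into the target namespace, cvl_0_1 stays in the root namespace as the initiator side):
  # condensed from the nvmf_tcp_init trace above
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator-side address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the listener port
  ping -c 1 10.0.0.2                                             # verify the target address answers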
00:13:34.664 [2024-04-25 20:08:32.579803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.924 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.924 [2024-04-25 20:08:32.697748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:34.924 [2024-04-25 20:08:32.796479] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:34.924 [2024-04-25 20:08:32.796674] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.924 [2024-04-25 20:08:32.796688] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.924 [2024-04-25 20:08:32.796698] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:34.924 [2024-04-25 20:08:32.796859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.924 [2024-04-25 20:08:32.796954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.924 [2024-04-25 20:08:32.797058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.924 [2024-04-25 20:08:32.797066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.493 20:08:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:35.493 20:08:33 -- common/autotest_common.sh@852 -- # return 0 00:13:35.493 20:08:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:35.493 20:08:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:35.493 20:08:33 -- common/autotest_common.sh@10 -- # set +x 00:13:35.493 20:08:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.493 20:08:33 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:35.493 20:08:33 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:35.493 20:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.493 20:08:33 -- common/autotest_common.sh@10 -- # set +x 00:13:35.493 [2024-04-25 20:08:33.330028] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.493 20:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.493 20:08:33 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:35.493 20:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.493 20:08:33 -- common/autotest_common.sh@10 -- # set +x 00:13:35.752 Malloc1 00:13:35.752 20:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.752 20:08:33 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:35.752 20:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.752 20:08:33 -- common/autotest_common.sh@10 -- # set +x 00:13:35.752 20:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.752 20:08:33 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:35.752 20:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.752 20:08:33 -- common/autotest_common.sh@10 -- # set +x 00:13:35.752 20:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.752 20:08:33 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
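rpc_cmd in this trace is a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock; the target configuration just built could equally be issued by hand with the same RPC names and arguments (sketch only, repository path abbreviated, -c 0 reflects the zero in-capsule size of this test case):
  # same RPCs as the rpc_cmd calls above, issued directly
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420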
00:13:35.752 20:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.752 20:08:33 -- common/autotest_common.sh@10 -- # set +x 00:13:35.752 [2024-04-25 20:08:33.607802] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.752 20:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.752 20:08:33 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:35.752 20:08:33 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:13:35.752 20:08:33 -- common/autotest_common.sh@1358 -- # local bdev_info 00:13:35.752 20:08:33 -- common/autotest_common.sh@1359 -- # local bs 00:13:35.752 20:08:33 -- common/autotest_common.sh@1360 -- # local nb 00:13:35.752 20:08:33 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:35.752 20:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.752 20:08:33 -- common/autotest_common.sh@10 -- # set +x 00:13:35.752 20:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.752 20:08:33 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:13:35.752 { 00:13:35.752 "name": "Malloc1", 00:13:35.752 "aliases": [ 00:13:35.752 "ec4665d2-968f-4fa7-b5e6-4d1410548123" 00:13:35.752 ], 00:13:35.752 "product_name": "Malloc disk", 00:13:35.752 "block_size": 512, 00:13:35.752 "num_blocks": 1048576, 00:13:35.752 "uuid": "ec4665d2-968f-4fa7-b5e6-4d1410548123", 00:13:35.752 "assigned_rate_limits": { 00:13:35.752 "rw_ios_per_sec": 0, 00:13:35.752 "rw_mbytes_per_sec": 0, 00:13:35.752 "r_mbytes_per_sec": 0, 00:13:35.752 "w_mbytes_per_sec": 0 00:13:35.752 }, 00:13:35.752 "claimed": true, 00:13:35.752 "claim_type": "exclusive_write", 00:13:35.752 "zoned": false, 00:13:35.752 "supported_io_types": { 00:13:35.752 "read": true, 00:13:35.752 "write": true, 00:13:35.752 "unmap": true, 00:13:35.752 "write_zeroes": true, 00:13:35.752 "flush": true, 00:13:35.752 "reset": true, 00:13:35.752 "compare": false, 00:13:35.752 "compare_and_write": false, 00:13:35.752 "abort": true, 00:13:35.752 "nvme_admin": false, 00:13:35.752 "nvme_io": false 00:13:35.752 }, 00:13:35.752 "memory_domains": [ 00:13:35.752 { 00:13:35.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.752 "dma_device_type": 2 00:13:35.752 } 00:13:35.752 ], 00:13:35.752 "driver_specific": {} 00:13:35.752 } 00:13:35.752 ]' 00:13:35.752 20:08:33 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:13:35.752 20:08:33 -- common/autotest_common.sh@1362 -- # bs=512 00:13:35.752 20:08:33 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:13:36.010 20:08:33 -- common/autotest_common.sh@1363 -- # nb=1048576 00:13:36.010 20:08:33 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:13:36.010 20:08:33 -- common/autotest_common.sh@1367 -- # echo 512 00:13:36.010 20:08:33 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:36.010 20:08:33 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:37.390 20:08:35 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:37.390 20:08:35 -- common/autotest_common.sh@1177 -- # local i=0 00:13:37.390 20:08:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:37.390 20:08:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:37.390 20:08:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:39.299 20:08:37 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:39.299 20:08:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:39.299 20:08:37 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:39.299 20:08:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:39.299 20:08:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:39.299 20:08:37 -- common/autotest_common.sh@1187 -- # return 0 00:13:39.299 20:08:37 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:39.299 20:08:37 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:39.299 20:08:37 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:39.299 20:08:37 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:39.299 20:08:37 -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:39.299 20:08:37 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:39.299 20:08:37 -- setup/common.sh@80 -- # echo 536870912 00:13:39.299 20:08:37 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:39.299 20:08:37 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:39.559 20:08:37 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:39.559 20:08:37 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:39.867 20:08:37 -- target/filesystem.sh@69 -- # partprobe 00:13:40.458 20:08:38 -- target/filesystem.sh@70 -- # sleep 1 00:13:41.397 20:08:39 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:41.397 20:08:39 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:41.397 20:08:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:41.397 20:08:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:41.397 20:08:39 -- common/autotest_common.sh@10 -- # set +x 00:13:41.397 ************************************ 00:13:41.397 START TEST filesystem_ext4 00:13:41.397 ************************************ 00:13:41.397 20:08:39 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:41.397 20:08:39 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:41.397 20:08:39 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:41.397 20:08:39 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:41.397 20:08:39 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:13:41.397 20:08:39 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:13:41.397 20:08:39 -- common/autotest_common.sh@904 -- # local i=0 00:13:41.397 20:08:39 -- common/autotest_common.sh@905 -- # local force 00:13:41.397 20:08:39 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:13:41.397 20:08:39 -- common/autotest_common.sh@908 -- # force=-F 00:13:41.397 20:08:39 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:41.397 mke2fs 1.46.5 (30-Dec-2021) 00:13:41.397 Discarding device blocks: 0/522240 done 00:13:41.397 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:41.397 Filesystem UUID: 4f70b6b1-e7ce-486d-9e83-9c6018af1930 00:13:41.397 Superblock backups stored on blocks: 00:13:41.397 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:41.397 00:13:41.397 Allocating group tables: 0/64 done 00:13:41.397 Writing inode tables: 0/64 done 00:13:41.658 Creating journal (8192 blocks): done 00:13:41.658 Writing superblocks and filesystem accounting information: 0/64 done 00:13:41.658 00:13:41.658 20:08:39 -- 
common/autotest_common.sh@921 -- # return 0 00:13:41.658 20:08:39 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:41.658 20:08:39 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:41.919 20:08:39 -- target/filesystem.sh@25 -- # sync 00:13:41.919 20:08:39 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:41.919 20:08:39 -- target/filesystem.sh@27 -- # sync 00:13:41.919 20:08:39 -- target/filesystem.sh@29 -- # i=0 00:13:41.919 20:08:39 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:41.919 20:08:39 -- target/filesystem.sh@37 -- # kill -0 1430318 00:13:41.919 20:08:39 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:41.919 20:08:39 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:41.919 20:08:39 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:41.919 20:08:39 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:41.919 00:13:41.919 real 0m0.470s 00:13:41.919 user 0m0.023s 00:13:41.919 sys 0m0.040s 00:13:41.919 20:08:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:41.919 20:08:39 -- common/autotest_common.sh@10 -- # set +x 00:13:41.919 ************************************ 00:13:41.919 END TEST filesystem_ext4 00:13:41.919 ************************************ 00:13:41.919 20:08:39 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:41.919 20:08:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:41.919 20:08:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:41.919 20:08:39 -- common/autotest_common.sh@10 -- # set +x 00:13:41.919 ************************************ 00:13:41.919 START TEST filesystem_btrfs 00:13:41.919 ************************************ 00:13:41.919 20:08:39 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:41.919 20:08:39 -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:41.919 20:08:39 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:41.919 20:08:39 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:41.919 20:08:39 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:13:41.919 20:08:39 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:13:41.919 20:08:39 -- common/autotest_common.sh@904 -- # local i=0 00:13:41.919 20:08:39 -- common/autotest_common.sh@905 -- # local force 00:13:41.919 20:08:39 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:13:41.919 20:08:39 -- common/autotest_common.sh@910 -- # force=-f 00:13:41.919 20:08:39 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:42.178 btrfs-progs v6.6.2 00:13:42.178 See https://btrfs.readthedocs.io for more information. 00:13:42.178 00:13:42.178 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:42.178 NOTE: several default settings have changed in version 5.15, please make sure 00:13:42.178 this does not affect your deployments: 00:13:42.178 - DUP for metadata (-m dup) 00:13:42.178 - enabled no-holes (-O no-holes) 00:13:42.179 - enabled free-space-tree (-R free-space-tree) 00:13:42.179 00:13:42.179 Label: (null) 00:13:42.179 UUID: 838e5b8d-7ab5-4810-98f8-0dc92297e77a 00:13:42.179 Node size: 16384 00:13:42.179 Sector size: 4096 00:13:42.179 Filesystem size: 510.00MiB 00:13:42.179 Block group profiles: 00:13:42.179 Data: single 8.00MiB 00:13:42.179 Metadata: DUP 32.00MiB 00:13:42.179 System: DUP 8.00MiB 00:13:42.179 SSD detected: yes 00:13:42.179 Zoned device: no 00:13:42.179 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:42.179 Runtime features: free-space-tree 00:13:42.179 Checksum: crc32c 00:13:42.179 Number of devices: 1 00:13:42.179 Devices: 00:13:42.179 ID SIZE PATH 00:13:42.179 1 510.00MiB /dev/nvme0n1p1 00:13:42.179 00:13:42.179 20:08:39 -- common/autotest_common.sh@921 -- # return 0 00:13:42.179 20:08:39 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:42.748 20:08:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:42.748 20:08:40 -- target/filesystem.sh@25 -- # sync 00:13:42.748 20:08:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:42.748 20:08:40 -- target/filesystem.sh@27 -- # sync 00:13:42.748 20:08:40 -- target/filesystem.sh@29 -- # i=0 00:13:42.748 20:08:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:42.748 20:08:40 -- target/filesystem.sh@37 -- # kill -0 1430318 00:13:42.748 20:08:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:42.748 20:08:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:42.748 20:08:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:42.748 20:08:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:42.748 00:13:42.748 real 0m0.849s 00:13:42.748 user 0m0.017s 00:13:42.748 sys 0m0.056s 00:13:42.748 20:08:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:42.748 20:08:40 -- common/autotest_common.sh@10 -- # set +x 00:13:42.748 ************************************ 00:13:42.748 END TEST filesystem_btrfs 00:13:42.748 ************************************ 00:13:42.748 20:08:40 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:42.748 20:08:40 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:42.748 20:08:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:42.748 20:08:40 -- common/autotest_common.sh@10 -- # set +x 00:13:42.748 ************************************ 00:13:42.748 START TEST filesystem_xfs 00:13:42.748 ************************************ 00:13:42.748 20:08:40 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:13:42.748 20:08:40 -- target/filesystem.sh@18 -- # fstype=xfs 00:13:42.748 20:08:40 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:42.748 20:08:40 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:42.748 20:08:40 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:13:42.748 20:08:40 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:13:42.748 20:08:40 -- common/autotest_common.sh@904 -- # local i=0 00:13:42.748 20:08:40 -- common/autotest_common.sh@905 -- # local force 00:13:42.748 20:08:40 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:13:42.748 20:08:40 -- common/autotest_common.sh@910 -- # force=-f 00:13:42.748 20:08:40 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:42.748 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:42.748 = sectsz=512 attr=2, projid32bit=1 00:13:42.748 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:42.748 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:42.748 data = bsize=4096 blocks=130560, imaxpct=25 00:13:42.748 = sunit=0 swidth=0 blks 00:13:42.748 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:42.748 log =internal log bsize=4096 blocks=16384, version=2 00:13:42.748 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:42.748 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:43.687 Discarding blocks...Done. 00:13:43.687 20:08:41 -- common/autotest_common.sh@921 -- # return 0 00:13:43.687 20:08:41 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:46.226 20:08:43 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:46.226 20:08:43 -- target/filesystem.sh@25 -- # sync 00:13:46.226 20:08:43 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:46.226 20:08:43 -- target/filesystem.sh@27 -- # sync 00:13:46.226 20:08:43 -- target/filesystem.sh@29 -- # i=0 00:13:46.226 20:08:43 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:46.226 20:08:43 -- target/filesystem.sh@37 -- # kill -0 1430318 00:13:46.226 20:08:43 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:46.226 20:08:43 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:46.226 20:08:43 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:46.226 20:08:43 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:46.226 00:13:46.226 real 0m3.175s 00:13:46.226 user 0m0.021s 00:13:46.226 sys 0m0.050s 00:13:46.226 20:08:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:46.226 20:08:43 -- common/autotest_common.sh@10 -- # set +x 00:13:46.226 ************************************ 00:13:46.226 END TEST filesystem_xfs 00:13:46.226 ************************************ 00:13:46.226 20:08:43 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:46.226 20:08:44 -- target/filesystem.sh@93 -- # sync 00:13:46.226 20:08:44 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:46.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.485 20:08:44 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:46.485 20:08:44 -- common/autotest_common.sh@1198 -- # local i=0 00:13:46.485 20:08:44 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:46.485 20:08:44 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:46.485 20:08:44 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:46.485 20:08:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:46.485 20:08:44 -- common/autotest_common.sh@1210 -- # return 0 00:13:46.485 20:08:44 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:46.485 20:08:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.485 20:08:44 -- common/autotest_common.sh@10 -- # set +x 00:13:46.485 20:08:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.485 20:08:44 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:46.485 20:08:44 -- target/filesystem.sh@101 -- # killprocess 1430318 00:13:46.485 20:08:44 -- common/autotest_common.sh@926 -- # '[' -z 1430318 ']' 00:13:46.485 20:08:44 -- common/autotest_common.sh@930 -- # kill -0 1430318 00:13:46.485 20:08:44 -- 
common/autotest_common.sh@931 -- # uname 00:13:46.485 20:08:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:46.485 20:08:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1430318 00:13:46.485 20:08:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:46.485 20:08:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:46.485 20:08:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1430318' 00:13:46.485 killing process with pid 1430318 00:13:46.485 20:08:44 -- common/autotest_common.sh@945 -- # kill 1430318 00:13:46.485 20:08:44 -- common/autotest_common.sh@950 -- # wait 1430318 00:13:47.422 20:08:45 -- target/filesystem.sh@102 -- # nvmfpid= 00:13:47.422 00:13:47.422 real 0m12.666s 00:13:47.422 user 0m48.835s 00:13:47.422 sys 0m0.984s 00:13:47.422 20:08:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:47.422 20:08:45 -- common/autotest_common.sh@10 -- # set +x 00:13:47.422 ************************************ 00:13:47.422 END TEST nvmf_filesystem_no_in_capsule 00:13:47.422 ************************************ 00:13:47.422 20:08:45 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:47.422 20:08:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:47.422 20:08:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:47.422 20:08:45 -- common/autotest_common.sh@10 -- # set +x 00:13:47.422 ************************************ 00:13:47.422 START TEST nvmf_filesystem_in_capsule 00:13:47.422 ************************************ 00:13:47.422 20:08:45 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:13:47.422 20:08:45 -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:47.422 20:08:45 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:47.422 20:08:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:47.422 20:08:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:47.422 20:08:45 -- common/autotest_common.sh@10 -- # set +x 00:13:47.422 20:08:45 -- nvmf/common.sh@469 -- # nvmfpid=1432926 00:13:47.422 20:08:45 -- nvmf/common.sh@470 -- # waitforlisten 1432926 00:13:47.422 20:08:45 -- common/autotest_common.sh@819 -- # '[' -z 1432926 ']' 00:13:47.422 20:08:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.422 20:08:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:47.422 20:08:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.422 20:08:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:47.422 20:08:45 -- common/autotest_common.sh@10 -- # set +x 00:13:47.422 20:08:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:47.422 [2024-04-25 20:08:45.289048] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
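The records above show nvmfappstart launching the target application and waitforlisten blocking until its RPC socket answers. A minimal standalone sketch of that start-and-wait step (binary path and namespace name taken from this trace; the poll count and interval are assumptions, not the script's actual values):

# Sketch: launch nvmf_tgt inside the target namespace and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the default UNIX-domain RPC socket until the target responds (assumed limit: 100 tries).
for ((i = 0; i < 100; i++)); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.5
done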
00:13:47.422 [2024-04-25 20:08:45.289168] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.682 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.682 [2024-04-25 20:08:45.415205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:47.682 [2024-04-25 20:08:45.514636] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:47.682 [2024-04-25 20:08:45.514826] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.682 [2024-04-25 20:08:45.514840] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.682 [2024-04-25 20:08:45.514850] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.682 [2024-04-25 20:08:45.515009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.682 [2024-04-25 20:08:45.515139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.682 [2024-04-25 20:08:45.515249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.682 [2024-04-25 20:08:45.515261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.252 20:08:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:48.252 20:08:45 -- common/autotest_common.sh@852 -- # return 0 00:13:48.252 20:08:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:48.252 20:08:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:48.252 20:08:45 -- common/autotest_common.sh@10 -- # set +x 00:13:48.252 20:08:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.252 20:08:46 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:48.252 20:08:46 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:48.252 20:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.252 20:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:48.252 [2024-04-25 20:08:46.039750] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.252 20:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.252 20:08:46 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:48.252 20:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.252 20:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:48.511 Malloc1 00:13:48.511 20:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.511 20:08:46 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:48.511 20:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.511 20:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:48.511 20:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.511 20:08:46 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.511 20:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.511 20:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:48.511 20:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.511 20:08:46 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
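The rpc_cmd calls traced above stand up the in-capsule target: a TCP transport with a 4096-byte in-capsule data size, a malloc bdev, and one subsystem with a namespace and a listener. Issued directly with scripts/rpc.py (default RPC socket assumed), the same sequence is roughly:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096          # 4096-byte in-capsule data size
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1                    # malloc bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420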
00:13:48.511 20:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.511 20:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:48.511 [2024-04-25 20:08:46.319171] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.511 20:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.511 20:08:46 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:48.511 20:08:46 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:13:48.511 20:08:46 -- common/autotest_common.sh@1358 -- # local bdev_info 00:13:48.511 20:08:46 -- common/autotest_common.sh@1359 -- # local bs 00:13:48.511 20:08:46 -- common/autotest_common.sh@1360 -- # local nb 00:13:48.511 20:08:46 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:48.511 20:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.511 20:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:48.511 20:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.511 20:08:46 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:13:48.511 { 00:13:48.511 "name": "Malloc1", 00:13:48.511 "aliases": [ 00:13:48.511 "2648d641-a1ea-4aa0-93f6-f373e4bc8691" 00:13:48.511 ], 00:13:48.511 "product_name": "Malloc disk", 00:13:48.511 "block_size": 512, 00:13:48.511 "num_blocks": 1048576, 00:13:48.511 "uuid": "2648d641-a1ea-4aa0-93f6-f373e4bc8691", 00:13:48.511 "assigned_rate_limits": { 00:13:48.511 "rw_ios_per_sec": 0, 00:13:48.511 "rw_mbytes_per_sec": 0, 00:13:48.511 "r_mbytes_per_sec": 0, 00:13:48.511 "w_mbytes_per_sec": 0 00:13:48.511 }, 00:13:48.511 "claimed": true, 00:13:48.511 "claim_type": "exclusive_write", 00:13:48.511 "zoned": false, 00:13:48.511 "supported_io_types": { 00:13:48.511 "read": true, 00:13:48.511 "write": true, 00:13:48.511 "unmap": true, 00:13:48.511 "write_zeroes": true, 00:13:48.511 "flush": true, 00:13:48.511 "reset": true, 00:13:48.511 "compare": false, 00:13:48.511 "compare_and_write": false, 00:13:48.511 "abort": true, 00:13:48.511 "nvme_admin": false, 00:13:48.511 "nvme_io": false 00:13:48.511 }, 00:13:48.511 "memory_domains": [ 00:13:48.511 { 00:13:48.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.511 "dma_device_type": 2 00:13:48.511 } 00:13:48.511 ], 00:13:48.511 "driver_specific": {} 00:13:48.511 } 00:13:48.511 ]' 00:13:48.511 20:08:46 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:13:48.511 20:08:46 -- common/autotest_common.sh@1362 -- # bs=512 00:13:48.511 20:08:46 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:13:48.511 20:08:46 -- common/autotest_common.sh@1363 -- # nb=1048576 00:13:48.511 20:08:46 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:13:48.511 20:08:46 -- common/autotest_common.sh@1367 -- # echo 512 00:13:48.511 20:08:46 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:48.511 20:08:46 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:49.894 20:08:47 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:49.894 20:08:47 -- common/autotest_common.sh@1177 -- # local i=0 00:13:49.894 20:08:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:49.894 20:08:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:49.894 20:08:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:52.434 20:08:49 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:52.434 20:08:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:52.434 20:08:49 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:52.434 20:08:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:52.434 20:08:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:52.434 20:08:49 -- common/autotest_common.sh@1187 -- # return 0 00:13:52.434 20:08:49 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:52.434 20:08:49 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:52.434 20:08:49 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:52.434 20:08:49 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:52.434 20:08:49 -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:52.434 20:08:49 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:52.434 20:08:49 -- setup/common.sh@80 -- # echo 536870912 00:13:52.434 20:08:49 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:52.434 20:08:49 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:52.434 20:08:49 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:52.434 20:08:49 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:52.434 20:08:50 -- target/filesystem.sh@69 -- # partprobe 00:13:52.434 20:08:50 -- target/filesystem.sh@70 -- # sleep 1 00:13:53.813 20:08:51 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:53.813 20:08:51 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:53.813 20:08:51 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:53.813 20:08:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:53.813 20:08:51 -- common/autotest_common.sh@10 -- # set +x 00:13:53.813 ************************************ 00:13:53.813 START TEST filesystem_in_capsule_ext4 00:13:53.813 ************************************ 00:13:53.813 20:08:51 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:53.813 20:08:51 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:53.813 20:08:51 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:53.813 20:08:51 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:53.813 20:08:51 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:13:53.813 20:08:51 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:13:53.813 20:08:51 -- common/autotest_common.sh@904 -- # local i=0 00:13:53.813 20:08:51 -- common/autotest_common.sh@905 -- # local force 00:13:53.813 20:08:51 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:13:53.813 20:08:51 -- common/autotest_common.sh@908 -- # force=-F 00:13:53.813 20:08:51 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:53.813 mke2fs 1.46.5 (30-Dec-2021) 00:13:53.813 Discarding device blocks: 0/522240 done 00:13:53.813 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:53.813 Filesystem UUID: ae39254d-d293-461c-b257-bab68fa6521c 00:13:53.813 Superblock backups stored on blocks: 00:13:53.813 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:53.813 00:13:53.813 Allocating group tables: 0/64 done 00:13:53.813 Writing inode tables: 0/64 done 00:13:55.193 Creating journal (8192 blocks): done 00:13:55.761 Writing superblocks and filesystem accounting information: 0/64 done 00:13:55.761 00:13:55.761 
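The host-side steps traced in this passage (nvme connect, the waitforserial loop, then partitioning) can be reproduced standalone roughly as below; the hostid argument seen in the trace is omitted here and the retry limit is an assumption:

# Connect to the target and wait until the namespace shows up by serial number.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$(nvme gen-hostnqn)"
for ((i = 0; i <= 15; i++)); do
    lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME && break
    sleep 2
done
# Lay down a single GPT partition before the filesystem cases run against it.
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe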
20:08:53 -- common/autotest_common.sh@921 -- # return 0 00:13:55.761 20:08:53 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:55.761 20:08:53 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:55.761 20:08:53 -- target/filesystem.sh@25 -- # sync 00:13:55.761 20:08:53 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:55.761 20:08:53 -- target/filesystem.sh@27 -- # sync 00:13:55.761 20:08:53 -- target/filesystem.sh@29 -- # i=0 00:13:55.761 20:08:53 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:55.761 20:08:53 -- target/filesystem.sh@37 -- # kill -0 1432926 00:13:55.761 20:08:53 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:55.761 20:08:53 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:55.761 20:08:53 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:55.761 20:08:53 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:55.761 00:13:55.761 real 0m2.326s 00:13:55.761 user 0m0.029s 00:13:55.761 sys 0m0.037s 00:13:55.761 20:08:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.761 20:08:53 -- common/autotest_common.sh@10 -- # set +x 00:13:55.761 ************************************ 00:13:55.761 END TEST filesystem_in_capsule_ext4 00:13:55.761 ************************************ 00:13:56.022 20:08:53 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:56.022 20:08:53 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:56.022 20:08:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:56.022 20:08:53 -- common/autotest_common.sh@10 -- # set +x 00:13:56.022 ************************************ 00:13:56.022 START TEST filesystem_in_capsule_btrfs 00:13:56.022 ************************************ 00:13:56.022 20:08:53 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:56.022 20:08:53 -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:56.022 20:08:53 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:56.022 20:08:53 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:56.022 20:08:53 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:13:56.022 20:08:53 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:13:56.022 20:08:53 -- common/autotest_common.sh@904 -- # local i=0 00:13:56.022 20:08:53 -- common/autotest_common.sh@905 -- # local force 00:13:56.022 20:08:53 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:13:56.022 20:08:53 -- common/autotest_common.sh@910 -- # force=-f 00:13:56.022 20:08:53 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:56.022 btrfs-progs v6.6.2 00:13:56.022 See https://btrfs.readthedocs.io for more information. 00:13:56.022 00:13:56.022 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:56.022 NOTE: several default settings have changed in version 5.15, please make sure 00:13:56.022 this does not affect your deployments: 00:13:56.022 - DUP for metadata (-m dup) 00:13:56.022 - enabled no-holes (-O no-holes) 00:13:56.022 - enabled free-space-tree (-R free-space-tree) 00:13:56.022 00:13:56.022 Label: (null) 00:13:56.022 UUID: 8603113b-45e7-4a91-a1ac-74c3e1198b1a 00:13:56.022 Node size: 16384 00:13:56.022 Sector size: 4096 00:13:56.022 Filesystem size: 510.00MiB 00:13:56.022 Block group profiles: 00:13:56.022 Data: single 8.00MiB 00:13:56.022 Metadata: DUP 32.00MiB 00:13:56.022 System: DUP 8.00MiB 00:13:56.022 SSD detected: yes 00:13:56.022 Zoned device: no 00:13:56.022 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:56.022 Runtime features: free-space-tree 00:13:56.022 Checksum: crc32c 00:13:56.022 Number of devices: 1 00:13:56.022 Devices: 00:13:56.022 ID SIZE PATH 00:13:56.022 1 510.00MiB /dev/nvme0n1p1 00:13:56.022 00:13:56.022 20:08:53 -- common/autotest_common.sh@921 -- # return 0 00:13:56.022 20:08:53 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:56.590 20:08:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:56.590 20:08:54 -- target/filesystem.sh@25 -- # sync 00:13:56.590 20:08:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:56.590 20:08:54 -- target/filesystem.sh@27 -- # sync 00:13:56.590 20:08:54 -- target/filesystem.sh@29 -- # i=0 00:13:56.590 20:08:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:56.850 20:08:54 -- target/filesystem.sh@37 -- # kill -0 1432926 00:13:56.851 20:08:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:56.851 20:08:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:56.851 20:08:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:56.851 20:08:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:56.851 00:13:56.851 real 0m0.812s 00:13:56.851 user 0m0.018s 00:13:56.851 sys 0m0.051s 00:13:56.851 20:08:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:56.851 20:08:54 -- common/autotest_common.sh@10 -- # set +x 00:13:56.851 ************************************ 00:13:56.851 END TEST filesystem_in_capsule_btrfs 00:13:56.851 ************************************ 00:13:56.851 20:08:54 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:56.851 20:08:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:56.851 20:08:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:56.851 20:08:54 -- common/autotest_common.sh@10 -- # set +x 00:13:56.851 ************************************ 00:13:56.851 START TEST filesystem_in_capsule_xfs 00:13:56.851 ************************************ 00:13:56.851 20:08:54 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:13:56.851 20:08:54 -- target/filesystem.sh@18 -- # fstype=xfs 00:13:56.851 20:08:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:56.851 20:08:54 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:56.851 20:08:54 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:13:56.851 20:08:54 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:13:56.851 20:08:54 -- common/autotest_common.sh@904 -- # local i=0 00:13:56.851 20:08:54 -- common/autotest_common.sh@905 -- # local force 00:13:56.851 20:08:54 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:13:56.851 20:08:54 -- common/autotest_common.sh@910 -- # force=-f 
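Each filesystem case above goes through the same make_filesystem helper: pick the force flag for the fstype and retry mkfs until it succeeds. A sketch reconstructed from the visible steps (the retry limit and sleep are assumptions, not the script's actual values):

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force
    # mkfs.ext4 forces with -F; mkfs.xfs and mkfs.btrfs use -f.
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    until mkfs."$fstype" $force "$dev_name"; do
        [ $((++i)) -ge 5 ] && return 1     # assumed retry limit
        sleep 1
    done
    return 0
}

make_filesystem xfs /dev/nvme0n1p1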
00:13:56.851 20:08:54 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:56.851 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:56.851 = sectsz=512 attr=2, projid32bit=1 00:13:56.851 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:56.851 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:56.851 data = bsize=4096 blocks=130560, imaxpct=25 00:13:56.851 = sunit=0 swidth=0 blks 00:13:56.851 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:56.851 log =internal log bsize=4096 blocks=16384, version=2 00:13:56.851 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:56.851 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:57.791 Discarding blocks...Done. 00:13:57.791 20:08:55 -- common/autotest_common.sh@921 -- # return 0 00:13:57.791 20:08:55 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:59.695 20:08:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:59.695 20:08:57 -- target/filesystem.sh@25 -- # sync 00:13:59.695 20:08:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:59.695 20:08:57 -- target/filesystem.sh@27 -- # sync 00:13:59.695 20:08:57 -- target/filesystem.sh@29 -- # i=0 00:13:59.695 20:08:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:59.695 20:08:57 -- target/filesystem.sh@37 -- # kill -0 1432926 00:13:59.695 20:08:57 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:59.695 20:08:57 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:59.695 20:08:57 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:59.695 20:08:57 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:59.955 00:13:59.955 real 0m3.054s 00:13:59.955 user 0m0.015s 00:13:59.955 sys 0m0.050s 00:13:59.955 20:08:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.955 20:08:57 -- common/autotest_common.sh@10 -- # set +x 00:13:59.955 ************************************ 00:13:59.955 END TEST filesystem_in_capsule_xfs 00:13:59.955 ************************************ 00:13:59.955 20:08:57 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:00.216 20:08:57 -- target/filesystem.sh@93 -- # sync 00:14:00.216 20:08:57 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:00.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.216 20:08:58 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:00.216 20:08:58 -- common/autotest_common.sh@1198 -- # local i=0 00:14:00.216 20:08:58 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:00.216 20:08:58 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:00.216 20:08:58 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:00.216 20:08:58 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:00.216 20:08:58 -- common/autotest_common.sh@1210 -- # return 0 00:14:00.216 20:08:58 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:00.216 20:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.216 20:08:58 -- common/autotest_common.sh@10 -- # set +x 00:14:00.216 20:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.216 20:08:58 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:00.216 20:08:58 -- target/filesystem.sh@101 -- # killprocess 1432926 00:14:00.216 20:08:58 -- common/autotest_common.sh@926 -- # '[' -z 1432926 ']' 00:14:00.216 20:08:58 -- common/autotest_common.sh@930 -- # kill -0 1432926 
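After the last filesystem case, the trace tears everything down: drop the test partition, disconnect the host, wait for the device to disappear, then delete the subsystem and stop the target. A condensed sketch of that sequence (process handling simplified):

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1                 # remove the test partition under a lock
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1                  # detach the host-side controller
# Wait until the serial no longer appears before touching the target.
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"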
00:14:00.216 20:08:58 -- common/autotest_common.sh@931 -- # uname 00:14:00.475 20:08:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:00.475 20:08:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1432926 00:14:00.475 20:08:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:00.475 20:08:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:00.475 20:08:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1432926' 00:14:00.475 killing process with pid 1432926 00:14:00.475 20:08:58 -- common/autotest_common.sh@945 -- # kill 1432926 00:14:00.475 20:08:58 -- common/autotest_common.sh@950 -- # wait 1432926 00:14:01.412 20:08:59 -- target/filesystem.sh@102 -- # nvmfpid= 00:14:01.412 00:14:01.412 real 0m13.897s 00:14:01.412 user 0m53.656s 00:14:01.413 sys 0m1.027s 00:14:01.413 20:08:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:01.413 20:08:59 -- common/autotest_common.sh@10 -- # set +x 00:14:01.413 ************************************ 00:14:01.413 END TEST nvmf_filesystem_in_capsule 00:14:01.413 ************************************ 00:14:01.413 20:08:59 -- target/filesystem.sh@108 -- # nvmftestfini 00:14:01.413 20:08:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:01.413 20:08:59 -- nvmf/common.sh@116 -- # sync 00:14:01.413 20:08:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:01.413 20:08:59 -- nvmf/common.sh@119 -- # set +e 00:14:01.413 20:08:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:01.413 20:08:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:01.413 rmmod nvme_tcp 00:14:01.413 rmmod nvme_fabrics 00:14:01.413 rmmod nvme_keyring 00:14:01.413 20:08:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:01.413 20:08:59 -- nvmf/common.sh@123 -- # set -e 00:14:01.413 20:08:59 -- nvmf/common.sh@124 -- # return 0 00:14:01.413 20:08:59 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:14:01.413 20:08:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:01.413 20:08:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:01.413 20:08:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:01.413 20:08:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:01.413 20:08:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:01.413 20:08:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.413 20:08:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.413 20:08:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.323 20:09:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:03.323 00:14:03.323 real 0m34.470s 00:14:03.323 user 1m44.103s 00:14:03.323 sys 0m6.226s 00:14:03.323 20:09:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.323 20:09:01 -- common/autotest_common.sh@10 -- # set +x 00:14:03.323 ************************************ 00:14:03.323 END TEST nvmf_filesystem 00:14:03.323 ************************************ 00:14:03.584 20:09:01 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:03.584 20:09:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:03.584 20:09:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:03.584 20:09:01 -- common/autotest_common.sh@10 -- # set +x 00:14:03.584 ************************************ 00:14:03.584 START TEST nvmf_discovery 00:14:03.584 ************************************ 00:14:03.584 20:09:01 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:03.584 * Looking for test storage... 00:14:03.584 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:03.584 20:09:01 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:03.584 20:09:01 -- nvmf/common.sh@7 -- # uname -s 00:14:03.584 20:09:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.584 20:09:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.584 20:09:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.584 20:09:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.584 20:09:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.584 20:09:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.584 20:09:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.584 20:09:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.584 20:09:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.584 20:09:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.584 20:09:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:03.584 20:09:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:03.584 20:09:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.584 20:09:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.584 20:09:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:03.584 20:09:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:03.584 20:09:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.584 20:09:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.584 20:09:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.584 20:09:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.584 20:09:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.584 20:09:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.584 20:09:01 -- paths/export.sh@5 -- # export PATH 00:14:03.584 20:09:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.584 20:09:01 -- nvmf/common.sh@46 -- # : 0 00:14:03.584 20:09:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:03.584 20:09:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:03.584 20:09:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:03.584 20:09:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.584 20:09:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.584 20:09:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:03.584 20:09:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:03.584 20:09:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:03.584 20:09:01 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:14:03.584 20:09:01 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:03.584 20:09:01 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:03.584 20:09:01 -- target/discovery.sh@15 -- # hash nvme 00:14:03.584 20:09:01 -- target/discovery.sh@20 -- # nvmftestinit 00:14:03.584 20:09:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:03.584 20:09:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.584 20:09:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:03.585 20:09:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:03.585 20:09:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:03.585 20:09:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.585 20:09:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.585 20:09:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.585 20:09:01 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:14:03.585 20:09:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:03.585 20:09:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:03.585 20:09:01 -- common/autotest_common.sh@10 -- # set +x 00:14:10.198 20:09:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:10.198 20:09:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:10.198 20:09:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:10.198 20:09:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:10.198 20:09:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:10.198 20:09:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:10.198 20:09:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:10.198 
20:09:07 -- nvmf/common.sh@294 -- # net_devs=() 00:14:10.198 20:09:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:10.198 20:09:07 -- nvmf/common.sh@295 -- # e810=() 00:14:10.198 20:09:07 -- nvmf/common.sh@295 -- # local -ga e810 00:14:10.198 20:09:07 -- nvmf/common.sh@296 -- # x722=() 00:14:10.198 20:09:07 -- nvmf/common.sh@296 -- # local -ga x722 00:14:10.198 20:09:07 -- nvmf/common.sh@297 -- # mlx=() 00:14:10.198 20:09:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:10.198 20:09:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.198 20:09:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.198 20:09:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.198 20:09:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.198 20:09:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.198 20:09:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.198 20:09:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.198 20:09:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.198 20:09:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.198 20:09:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.198 20:09:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.198 20:09:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:10.198 20:09:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:10.198 20:09:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:10.198 20:09:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:10.198 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:10.198 20:09:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:10.198 20:09:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:10.198 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:10.198 20:09:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:10.198 20:09:07 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:10.198 20:09:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.198 20:09:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:10.198 20:09:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.198 20:09:07 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:10.198 Found net devices under 0000:27:00.0: cvl_0_0 00:14:10.198 20:09:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.198 20:09:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:10.198 20:09:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.198 20:09:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:10.198 20:09:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.198 20:09:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:10.198 Found net devices under 0000:27:00.1: cvl_0_1 00:14:10.198 20:09:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.198 20:09:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:10.198 20:09:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:10.198 20:09:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:10.198 20:09:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.198 20:09:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.198 20:09:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.198 20:09:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:10.198 20:09:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.198 20:09:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.198 20:09:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:10.198 20:09:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.198 20:09:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.198 20:09:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:10.198 20:09:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:10.198 20:09:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.198 20:09:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.198 20:09:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.198 20:09:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.198 20:09:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:10.198 20:09:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.198 20:09:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.198 20:09:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.198 20:09:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:10.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:14:10.198 00:14:10.198 --- 10.0.0.2 ping statistics --- 00:14:10.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.198 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:14:10.198 20:09:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:10.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:14:10.198 00:14:10.198 --- 10.0.0.1 ping statistics --- 00:14:10.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.198 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:14:10.198 20:09:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.198 20:09:07 -- nvmf/common.sh@410 -- # return 0 00:14:10.198 20:09:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:10.198 20:09:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.198 20:09:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:10.198 20:09:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.198 20:09:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:10.198 20:09:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:10.198 20:09:07 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:10.198 20:09:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:10.198 20:09:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:10.198 20:09:07 -- common/autotest_common.sh@10 -- # set +x 00:14:10.198 20:09:07 -- nvmf/common.sh@469 -- # nvmfpid=1440285 00:14:10.198 20:09:07 -- nvmf/common.sh@470 -- # waitforlisten 1440285 00:14:10.198 20:09:07 -- common/autotest_common.sh@819 -- # '[' -z 1440285 ']' 00:14:10.198 20:09:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:10.198 20:09:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.198 20:09:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:10.198 20:09:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.199 20:09:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:10.199 20:09:07 -- common/autotest_common.sh@10 -- # set +x 00:14:10.199 [2024-04-25 20:09:07.487217] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:10.199 [2024-04-25 20:09:07.487325] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.199 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.199 [2024-04-25 20:09:07.607239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.199 [2024-04-25 20:09:07.709180] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:10.199 [2024-04-25 20:09:07.709369] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.199 [2024-04-25 20:09:07.709384] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.199 [2024-04-25 20:09:07.709393] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
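The nvmf_tcp_init records above build the phy-fallback topology the discovery test runs on: one port is moved into a network namespace as the target side, its peer stays in the root namespace as the initiator, and a ping in each direction confirms the path before nvme-tcp is loaded. Collected into one place (interface names taken from this trace), the setup is roughly:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp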
00:14:10.199 [2024-04-25 20:09:07.709468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.199 [2024-04-25 20:09:07.709501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.199 [2024-04-25 20:09:07.709520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.199 [2024-04-25 20:09:07.709527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.460 20:09:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:10.460 20:09:08 -- common/autotest_common.sh@852 -- # return 0 00:14:10.460 20:09:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:10.460 20:09:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:10.460 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.460 20:09:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.460 20:09:08 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:10.460 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.460 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.460 [2024-04-25 20:09:08.239841] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.460 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.460 20:09:08 -- target/discovery.sh@26 -- # seq 1 4 00:14:10.460 20:09:08 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:10.460 20:09:08 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:10.461 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.461 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.461 Null1 00:14:10.461 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.461 20:09:08 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:10.461 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.461 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.461 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.461 20:09:08 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:10.461 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.461 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.461 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.461 20:09:08 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:10.461 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.461 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.461 [2024-04-25 20:09:08.292118] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.461 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.461 20:09:08 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:10.461 20:09:08 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:10.461 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.461 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.461 Null2 00:14:10.461 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.461 20:09:08 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:10.461 20:09:08 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.461 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.461 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.461 20:09:08 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:10.461 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.461 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.461 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.461 20:09:08 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:10.461 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.461 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.461 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.461 20:09:08 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:10.461 20:09:08 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:10.461 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.461 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.461 Null3 00:14:10.461 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.461 20:09:08 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:10.461 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.461 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.461 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.461 20:09:08 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:10.461 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.461 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.461 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.461 20:09:08 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:10.461 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.461 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.461 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.461 20:09:08 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:10.461 20:09:08 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:10.461 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.461 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.461 Null4 00:14:10.461 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.461 20:09:08 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:10.461 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.461 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.461 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.461 20:09:08 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:10.461 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.461 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.461 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.461 20:09:08 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:10.461 
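The discovery test above repeats the same create steps for cnode1 through cnode4, then exposes the discovery service itself and adds a referral. Collapsed into a loop over scripts/rpc.py (default socket assumed; sizes and serials as used in the trace), that is roughly:

for i in $(seq 1 4); do
    ./scripts/rpc.py bdev_null_create "Null$i" 102400 512                          # null bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420  # discovery service listener
./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430            # referral on port 4430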
20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.461 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.721 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.721 20:09:08 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:10.721 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.721 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.721 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.721 20:09:08 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:10.721 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.721 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.721 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.721 20:09:08 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 4420 00:14:10.721 00:14:10.721 Discovery Log Number of Records 6, Generation counter 6 00:14:10.721 =====Discovery Log Entry 0====== 00:14:10.721 trtype: tcp 00:14:10.721 adrfam: ipv4 00:14:10.721 subtype: current discovery subsystem 00:14:10.721 treq: not required 00:14:10.721 portid: 0 00:14:10.721 trsvcid: 4420 00:14:10.721 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:10.721 traddr: 10.0.0.2 00:14:10.721 eflags: explicit discovery connections, duplicate discovery information 00:14:10.721 sectype: none 00:14:10.721 =====Discovery Log Entry 1====== 00:14:10.721 trtype: tcp 00:14:10.721 adrfam: ipv4 00:14:10.721 subtype: nvme subsystem 00:14:10.721 treq: not required 00:14:10.721 portid: 0 00:14:10.721 trsvcid: 4420 00:14:10.721 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:10.721 traddr: 10.0.0.2 00:14:10.721 eflags: none 00:14:10.721 sectype: none 00:14:10.721 =====Discovery Log Entry 2====== 00:14:10.721 trtype: tcp 00:14:10.721 adrfam: ipv4 00:14:10.721 subtype: nvme subsystem 00:14:10.721 treq: not required 00:14:10.721 portid: 0 00:14:10.721 trsvcid: 4420 00:14:10.721 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:10.721 traddr: 10.0.0.2 00:14:10.721 eflags: none 00:14:10.721 sectype: none 00:14:10.721 =====Discovery Log Entry 3====== 00:14:10.721 trtype: tcp 00:14:10.721 adrfam: ipv4 00:14:10.721 subtype: nvme subsystem 00:14:10.721 treq: not required 00:14:10.721 portid: 0 00:14:10.721 trsvcid: 4420 00:14:10.721 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:10.721 traddr: 10.0.0.2 00:14:10.721 eflags: none 00:14:10.721 sectype: none 00:14:10.721 =====Discovery Log Entry 4====== 00:14:10.721 trtype: tcp 00:14:10.721 adrfam: ipv4 00:14:10.721 subtype: nvme subsystem 00:14:10.721 treq: not required 00:14:10.721 portid: 0 00:14:10.721 trsvcid: 4420 00:14:10.721 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:10.721 traddr: 10.0.0.2 00:14:10.721 eflags: none 00:14:10.721 sectype: none 00:14:10.721 =====Discovery Log Entry 5====== 00:14:10.721 trtype: tcp 00:14:10.721 adrfam: ipv4 00:14:10.721 subtype: discovery subsystem referral 00:14:10.721 treq: not required 00:14:10.721 portid: 0 00:14:10.721 trsvcid: 4430 00:14:10.721 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:10.721 traddr: 10.0.0.2 00:14:10.721 eflags: none 00:14:10.721 sectype: none 00:14:10.722 20:09:08 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:10.722 Perform nvmf subsystem discovery via RPC 00:14:10.722 20:09:08 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:10.722 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.722 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.722 [2024-04-25 20:09:08.596378] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:14:10.722 [ 00:14:10.722 { 00:14:10.722 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:10.722 "subtype": "Discovery", 00:14:10.722 "listen_addresses": [ 00:14:10.722 { 00:14:10.722 "transport": "TCP", 00:14:10.722 "trtype": "TCP", 00:14:10.722 "adrfam": "IPv4", 00:14:10.722 "traddr": "10.0.0.2", 00:14:10.722 "trsvcid": "4420" 00:14:10.722 } 00:14:10.722 ], 00:14:10.722 "allow_any_host": true, 00:14:10.722 "hosts": [] 00:14:10.722 }, 00:14:10.722 { 00:14:10.722 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.722 "subtype": "NVMe", 00:14:10.722 "listen_addresses": [ 00:14:10.722 { 00:14:10.722 "transport": "TCP", 00:14:10.722 "trtype": "TCP", 00:14:10.722 "adrfam": "IPv4", 00:14:10.722 "traddr": "10.0.0.2", 00:14:10.722 "trsvcid": "4420" 00:14:10.722 } 00:14:10.722 ], 00:14:10.722 "allow_any_host": true, 00:14:10.722 "hosts": [], 00:14:10.722 "serial_number": "SPDK00000000000001", 00:14:10.722 "model_number": "SPDK bdev Controller", 00:14:10.722 "max_namespaces": 32, 00:14:10.722 "min_cntlid": 1, 00:14:10.722 "max_cntlid": 65519, 00:14:10.722 "namespaces": [ 00:14:10.722 { 00:14:10.722 "nsid": 1, 00:14:10.722 "bdev_name": "Null1", 00:14:10.722 "name": "Null1", 00:14:10.722 "nguid": "F51E90B409794A898677B6BE643BA1B0", 00:14:10.722 "uuid": "f51e90b4-0979-4a89-8677-b6be643ba1b0" 00:14:10.722 } 00:14:10.722 ] 00:14:10.722 }, 00:14:10.722 { 00:14:10.722 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:10.722 "subtype": "NVMe", 00:14:10.722 "listen_addresses": [ 00:14:10.722 { 00:14:10.722 "transport": "TCP", 00:14:10.722 "trtype": "TCP", 00:14:10.722 "adrfam": "IPv4", 00:14:10.722 "traddr": "10.0.0.2", 00:14:10.722 "trsvcid": "4420" 00:14:10.722 } 00:14:10.722 ], 00:14:10.722 "allow_any_host": true, 00:14:10.722 "hosts": [], 00:14:10.722 "serial_number": "SPDK00000000000002", 00:14:10.722 "model_number": "SPDK bdev Controller", 00:14:10.722 "max_namespaces": 32, 00:14:10.722 "min_cntlid": 1, 00:14:10.722 "max_cntlid": 65519, 00:14:10.722 "namespaces": [ 00:14:10.722 { 00:14:10.722 "nsid": 1, 00:14:10.722 "bdev_name": "Null2", 00:14:10.722 "name": "Null2", 00:14:10.722 "nguid": "C06A1A03FB0B43AFABE2BA8D9B2235E1", 00:14:10.722 "uuid": "c06a1a03-fb0b-43af-abe2-ba8d9b2235e1" 00:14:10.722 } 00:14:10.722 ] 00:14:10.722 }, 00:14:10.722 { 00:14:10.722 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:10.722 "subtype": "NVMe", 00:14:10.722 "listen_addresses": [ 00:14:10.722 { 00:14:10.722 "transport": "TCP", 00:14:10.722 "trtype": "TCP", 00:14:10.722 "adrfam": "IPv4", 00:14:10.722 "traddr": "10.0.0.2", 00:14:10.722 "trsvcid": "4420" 00:14:10.722 } 00:14:10.722 ], 00:14:10.722 "allow_any_host": true, 00:14:10.722 "hosts": [], 00:14:10.722 "serial_number": "SPDK00000000000003", 00:14:10.722 "model_number": "SPDK bdev Controller", 00:14:10.722 "max_namespaces": 32, 00:14:10.722 "min_cntlid": 1, 00:14:10.722 "max_cntlid": 65519, 00:14:10.722 "namespaces": [ 00:14:10.722 { 00:14:10.722 "nsid": 1, 00:14:10.722 "bdev_name": "Null3", 00:14:10.722 "name": "Null3", 00:14:10.722 "nguid": "7C356B3DF28D426698E50135E700E75F", 00:14:10.722 "uuid": "7c356b3d-f28d-4266-98e5-0135e700e75f" 00:14:10.722 } 00:14:10.722 ] 
00:14:10.722 }, 00:14:10.722 { 00:14:10.722 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:10.722 "subtype": "NVMe", 00:14:10.722 "listen_addresses": [ 00:14:10.722 { 00:14:10.722 "transport": "TCP", 00:14:10.722 "trtype": "TCP", 00:14:10.722 "adrfam": "IPv4", 00:14:10.722 "traddr": "10.0.0.2", 00:14:10.722 "trsvcid": "4420" 00:14:10.722 } 00:14:10.722 ], 00:14:10.722 "allow_any_host": true, 00:14:10.722 "hosts": [], 00:14:10.722 "serial_number": "SPDK00000000000004", 00:14:10.722 "model_number": "SPDK bdev Controller", 00:14:10.722 "max_namespaces": 32, 00:14:10.722 "min_cntlid": 1, 00:14:10.722 "max_cntlid": 65519, 00:14:10.722 "namespaces": [ 00:14:10.722 { 00:14:10.722 "nsid": 1, 00:14:10.722 "bdev_name": "Null4", 00:14:10.722 "name": "Null4", 00:14:10.722 "nguid": "AC44D0E83EBB4DDDA53F87E5AF1E2CF5", 00:14:10.722 "uuid": "ac44d0e8-3ebb-4ddd-a53f-87e5af1e2cf5" 00:14:10.722 } 00:14:10.722 ] 00:14:10.722 } 00:14:10.722 ] 00:14:10.722 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.722 20:09:08 -- target/discovery.sh@42 -- # seq 1 4 00:14:10.722 20:09:08 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:10.722 20:09:08 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:10.722 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.722 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.722 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.722 20:09:08 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:10.722 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.722 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.722 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.722 20:09:08 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:10.722 20:09:08 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:10.722 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.722 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.723 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.723 20:09:08 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:10.723 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.723 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.723 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.723 20:09:08 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:10.723 20:09:08 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:10.723 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.723 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.982 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.982 20:09:08 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:10.982 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.982 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.982 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.982 20:09:08 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:10.982 20:09:08 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:10.982 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.982 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.982 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
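The teardown loop above deletes each subsystem and its null bdev, and the test then confirms nothing is left behind. A hedged sketch of that final check, reusing the $rpc path assumed earlier:

# list remaining subsystem NQNs; only the discovery subsystem should survive the delete loop
"$rpc" nvmf_get_subsystems | jq -r '.[].nqn'
# the run above does the same emptiness check via "bdev_get_bdevs | jq -r '.[].name'"
check_bdevs=$("$rpc" bdev_get_bdevs | jq -r '.[].name')
[ -z "$check_bdevs" ] && echo "all Null bdevs deleted"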
00:14:10.982 20:09:08 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:10.982 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.982 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.982 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.982 20:09:08 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:10.982 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.982 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.982 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.982 20:09:08 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:10.982 20:09:08 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:10.982 20:09:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.982 20:09:08 -- common/autotest_common.sh@10 -- # set +x 00:14:10.982 20:09:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.982 20:09:08 -- target/discovery.sh@49 -- # check_bdevs= 00:14:10.982 20:09:08 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:10.982 20:09:08 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:10.982 20:09:08 -- target/discovery.sh@57 -- # nvmftestfini 00:14:10.982 20:09:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:10.982 20:09:08 -- nvmf/common.sh@116 -- # sync 00:14:10.982 20:09:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:10.982 20:09:08 -- nvmf/common.sh@119 -- # set +e 00:14:10.982 20:09:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:10.982 20:09:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:10.982 rmmod nvme_tcp 00:14:10.982 rmmod nvme_fabrics 00:14:10.982 rmmod nvme_keyring 00:14:10.982 20:09:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:10.982 20:09:08 -- nvmf/common.sh@123 -- # set -e 00:14:10.982 20:09:08 -- nvmf/common.sh@124 -- # return 0 00:14:10.982 20:09:08 -- nvmf/common.sh@477 -- # '[' -n 1440285 ']' 00:14:10.982 20:09:08 -- nvmf/common.sh@478 -- # killprocess 1440285 00:14:10.982 20:09:08 -- common/autotest_common.sh@926 -- # '[' -z 1440285 ']' 00:14:10.982 20:09:08 -- common/autotest_common.sh@930 -- # kill -0 1440285 00:14:10.982 20:09:08 -- common/autotest_common.sh@931 -- # uname 00:14:10.982 20:09:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:10.982 20:09:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1440285 00:14:10.982 20:09:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:10.982 20:09:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:10.982 20:09:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1440285' 00:14:10.982 killing process with pid 1440285 00:14:10.982 20:09:08 -- common/autotest_common.sh@945 -- # kill 1440285 00:14:10.982 [2024-04-25 20:09:08.803310] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:10.982 20:09:08 -- common/autotest_common.sh@950 -- # wait 1440285 00:14:11.552 20:09:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:11.552 20:09:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:11.552 20:09:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:11.552 20:09:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:11.552 20:09:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:11.552 20:09:09 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.552 20:09:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.552 20:09:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.464 20:09:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:13.464 00:14:13.464 real 0m10.045s 00:14:13.464 user 0m7.439s 00:14:13.464 sys 0m4.857s 00:14:13.464 20:09:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:13.464 20:09:11 -- common/autotest_common.sh@10 -- # set +x 00:14:13.464 ************************************ 00:14:13.464 END TEST nvmf_discovery 00:14:13.464 ************************************ 00:14:13.464 20:09:11 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:13.464 20:09:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:13.464 20:09:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:13.464 20:09:11 -- common/autotest_common.sh@10 -- # set +x 00:14:13.464 ************************************ 00:14:13.464 START TEST nvmf_referrals 00:14:13.464 ************************************ 00:14:13.464 20:09:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:13.725 * Looking for test storage... 00:14:13.725 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:13.725 20:09:11 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.725 20:09:11 -- nvmf/common.sh@7 -- # uname -s 00:14:13.725 20:09:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.725 20:09:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.725 20:09:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.725 20:09:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.725 20:09:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.725 20:09:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.725 20:09:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.726 20:09:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.726 20:09:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.726 20:09:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.726 20:09:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:13.726 20:09:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:13.726 20:09:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.726 20:09:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.726 20:09:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:13.726 20:09:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:13.726 20:09:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.726 20:09:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.726 20:09:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.726 20:09:11 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.726 20:09:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.726 20:09:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.726 20:09:11 -- paths/export.sh@5 -- # export PATH 00:14:13.726 20:09:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.726 20:09:11 -- nvmf/common.sh@46 -- # : 0 00:14:13.726 20:09:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:13.726 20:09:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:13.726 20:09:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:13.726 20:09:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.726 20:09:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.726 20:09:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:13.726 20:09:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:13.726 20:09:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:13.726 20:09:11 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:13.726 20:09:11 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:13.726 20:09:11 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:13.726 20:09:11 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:13.726 20:09:11 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:13.726 20:09:11 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:13.726 20:09:11 -- target/referrals.sh@37 -- # nvmftestinit 00:14:13.726 20:09:11 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:14:13.726 20:09:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.726 20:09:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:13.726 20:09:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:13.726 20:09:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:13.726 20:09:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.726 20:09:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.726 20:09:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.726 20:09:11 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:14:13.726 20:09:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:13.726 20:09:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:13.726 20:09:11 -- common/autotest_common.sh@10 -- # set +x 00:14:19.011 20:09:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:19.011 20:09:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:19.011 20:09:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:19.011 20:09:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:19.011 20:09:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:19.011 20:09:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:19.011 20:09:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:19.011 20:09:16 -- nvmf/common.sh@294 -- # net_devs=() 00:14:19.011 20:09:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:19.011 20:09:16 -- nvmf/common.sh@295 -- # e810=() 00:14:19.011 20:09:16 -- nvmf/common.sh@295 -- # local -ga e810 00:14:19.011 20:09:16 -- nvmf/common.sh@296 -- # x722=() 00:14:19.011 20:09:16 -- nvmf/common.sh@296 -- # local -ga x722 00:14:19.011 20:09:16 -- nvmf/common.sh@297 -- # mlx=() 00:14:19.011 20:09:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:19.011 20:09:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:19.011 20:09:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:19.011 20:09:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:19.011 20:09:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:19.011 20:09:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:19.011 20:09:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:19.011 20:09:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:19.011 20:09:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:19.011 20:09:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:19.011 20:09:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:19.011 20:09:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:19.011 20:09:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:19.011 20:09:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:19.011 20:09:16 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:14:19.011 20:09:16 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:14:19.011 20:09:16 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:14:19.011 20:09:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:19.011 20:09:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:19.011 20:09:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:19.011 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:19.011 20:09:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:19.011 20:09:16 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:19.011 20:09:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.011 20:09:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.011 20:09:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:19.011 20:09:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:19.011 20:09:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:19.011 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:19.011 20:09:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:19.011 20:09:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:19.011 20:09:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.011 20:09:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.011 20:09:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:19.011 20:09:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:19.011 20:09:16 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:14:19.011 20:09:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:19.011 20:09:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.011 20:09:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:19.011 20:09:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.011 20:09:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:19.011 Found net devices under 0000:27:00.0: cvl_0_0 00:14:19.011 20:09:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.011 20:09:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:19.011 20:09:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.011 20:09:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:19.011 20:09:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.011 20:09:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:19.011 Found net devices under 0000:27:00.1: cvl_0_1 00:14:19.011 20:09:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.011 20:09:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:19.011 20:09:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:19.011 20:09:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:19.011 20:09:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:19.011 20:09:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:19.011 20:09:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.011 20:09:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.011 20:09:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:19.011 20:09:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:19.011 20:09:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:19.011 20:09:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:19.011 20:09:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:19.011 20:09:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:19.011 20:09:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.012 20:09:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:19.012 20:09:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:19.012 20:09:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:19.012 20:09:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:19.272 20:09:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:14:19.272 20:09:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:19.272 20:09:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:19.272 20:09:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:19.272 20:09:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:19.272 20:09:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:19.272 20:09:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:19.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.728 ms 00:14:19.272 00:14:19.272 --- 10.0.0.2 ping statistics --- 00:14:19.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.272 rtt min/avg/max/mdev = 0.728/0.728/0.728/0.000 ms 00:14:19.272 20:09:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:19.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:19.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.387 ms 00:14:19.272 00:14:19.272 --- 10.0.0.1 ping statistics --- 00:14:19.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.272 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:14:19.272 20:09:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.272 20:09:17 -- nvmf/common.sh@410 -- # return 0 00:14:19.272 20:09:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:19.272 20:09:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.272 20:09:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:19.272 20:09:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:19.272 20:09:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.272 20:09:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:19.272 20:09:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:19.272 20:09:17 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:19.272 20:09:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:19.272 20:09:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:19.272 20:09:17 -- common/autotest_common.sh@10 -- # set +x 00:14:19.272 20:09:17 -- nvmf/common.sh@469 -- # nvmfpid=1444788 00:14:19.272 20:09:17 -- nvmf/common.sh@470 -- # waitforlisten 1444788 00:14:19.272 20:09:17 -- common/autotest_common.sh@819 -- # '[' -z 1444788 ']' 00:14:19.272 20:09:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.272 20:09:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:19.272 20:09:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.272 20:09:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:19.272 20:09:17 -- common/autotest_common.sh@10 -- # set +x 00:14:19.272 20:09:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:19.533 [2024-04-25 20:09:17.222051] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
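At this point the harness has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace and is waiting for its RPC socket before creating the transport. A rough sketch of that launch-and-wait step, with a simple polling loop standing in for the waitforlisten helper (an approximation, not the real helper) and the same $rpc path assumed above:

ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
tgt_pid=$!
# poll the RPC socket until the target answers; stand-in for waitforlisten
until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
# first RPC issued by target/referrals.sh@40 in the run above
"$rpc" nvmf_create_transport -t tcp -o -u 8192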
00:14:19.533 [2024-04-25 20:09:17.222178] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.533 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.533 [2024-04-25 20:09:17.358826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:19.533 [2024-04-25 20:09:17.454790] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:19.533 [2024-04-25 20:09:17.454988] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.533 [2024-04-25 20:09:17.455003] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.533 [2024-04-25 20:09:17.455015] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.533 [2024-04-25 20:09:17.455181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.533 [2024-04-25 20:09:17.455277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.533 [2024-04-25 20:09:17.455376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.533 [2024-04-25 20:09:17.455389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:20.101 20:09:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:20.101 20:09:17 -- common/autotest_common.sh@852 -- # return 0 00:14:20.101 20:09:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:20.101 20:09:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:20.101 20:09:17 -- common/autotest_common.sh@10 -- # set +x 00:14:20.101 20:09:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.101 20:09:17 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:20.101 20:09:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.101 20:09:17 -- common/autotest_common.sh@10 -- # set +x 00:14:20.101 [2024-04-25 20:09:17.952760] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.101 20:09:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.101 20:09:17 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:20.101 20:09:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.101 20:09:17 -- common/autotest_common.sh@10 -- # set +x 00:14:20.101 [2024-04-25 20:09:17.964979] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:20.101 20:09:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.101 20:09:17 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:20.101 20:09:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.101 20:09:17 -- common/autotest_common.sh@10 -- # set +x 00:14:20.101 20:09:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.101 20:09:17 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:20.101 20:09:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.101 20:09:17 -- common/autotest_common.sh@10 -- # set +x 00:14:20.101 20:09:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.101 20:09:17 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:14:20.101 20:09:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.101 20:09:17 -- common/autotest_common.sh@10 -- # set +x 00:14:20.101 20:09:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.101 20:09:17 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:20.101 20:09:17 -- target/referrals.sh@48 -- # jq length 00:14:20.101 20:09:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.101 20:09:17 -- common/autotest_common.sh@10 -- # set +x 00:14:20.101 20:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.101 20:09:18 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:20.101 20:09:18 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:20.101 20:09:18 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:20.360 20:09:18 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:20.360 20:09:18 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:20.360 20:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.360 20:09:18 -- target/referrals.sh@21 -- # sort 00:14:20.360 20:09:18 -- common/autotest_common.sh@10 -- # set +x 00:14:20.360 20:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.360 20:09:18 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:20.360 20:09:18 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:20.360 20:09:18 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:20.360 20:09:18 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:20.360 20:09:18 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:20.360 20:09:18 -- target/referrals.sh@26 -- # sort 00:14:20.360 20:09:18 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:20.360 20:09:18 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:20.360 20:09:18 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:20.360 20:09:18 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:20.360 20:09:18 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:20.360 20:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.360 20:09:18 -- common/autotest_common.sh@10 -- # set +x 00:14:20.360 20:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.360 20:09:18 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:20.360 20:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.360 20:09:18 -- common/autotest_common.sh@10 -- # set +x 00:14:20.360 20:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.360 20:09:18 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:20.360 20:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.361 20:09:18 -- common/autotest_common.sh@10 -- # set +x 00:14:20.361 20:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.361 20:09:18 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:20.361 20:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.361 20:09:18 -- 
common/autotest_common.sh@10 -- # set +x 00:14:20.361 20:09:18 -- target/referrals.sh@56 -- # jq length 00:14:20.620 20:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.620 20:09:18 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:20.620 20:09:18 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:20.620 20:09:18 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:20.620 20:09:18 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:20.620 20:09:18 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:20.620 20:09:18 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:20.620 20:09:18 -- target/referrals.sh@26 -- # sort 00:14:20.620 20:09:18 -- target/referrals.sh@26 -- # echo 00:14:20.620 20:09:18 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:20.620 20:09:18 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:20.620 20:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.620 20:09:18 -- common/autotest_common.sh@10 -- # set +x 00:14:20.620 20:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.620 20:09:18 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:20.620 20:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.620 20:09:18 -- common/autotest_common.sh@10 -- # set +x 00:14:20.620 20:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.620 20:09:18 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:20.620 20:09:18 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:20.620 20:09:18 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:20.620 20:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.620 20:09:18 -- common/autotest_common.sh@10 -- # set +x 00:14:20.620 20:09:18 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:20.620 20:09:18 -- target/referrals.sh@21 -- # sort 00:14:20.621 20:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.621 20:09:18 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:20.621 20:09:18 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:20.621 20:09:18 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:20.621 20:09:18 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:20.621 20:09:18 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:20.621 20:09:18 -- target/referrals.sh@26 -- # sort 00:14:20.621 20:09:18 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:20.621 20:09:18 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:20.882 20:09:18 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:20.882 20:09:18 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:20.882 20:09:18 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:20.882 20:09:18 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:20.882 20:09:18 -- 
target/referrals.sh@67 -- # jq -r .subnqn 00:14:20.882 20:09:18 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:20.882 20:09:18 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:20.882 20:09:18 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:20.882 20:09:18 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:20.882 20:09:18 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:20.882 20:09:18 -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:20.882 20:09:18 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:20.882 20:09:18 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:21.141 20:09:18 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:21.141 20:09:18 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:21.141 20:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.141 20:09:18 -- common/autotest_common.sh@10 -- # set +x 00:14:21.141 20:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.141 20:09:18 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:21.141 20:09:18 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:21.141 20:09:18 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:21.141 20:09:18 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:21.141 20:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.141 20:09:18 -- common/autotest_common.sh@10 -- # set +x 00:14:21.141 20:09:18 -- target/referrals.sh@21 -- # sort 00:14:21.141 20:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.141 20:09:18 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:21.141 20:09:18 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:21.141 20:09:18 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:21.141 20:09:18 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:21.141 20:09:18 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:21.141 20:09:18 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:21.141 20:09:18 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:21.141 20:09:18 -- target/referrals.sh@26 -- # sort 00:14:21.141 20:09:19 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:21.141 20:09:19 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:21.141 20:09:19 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:21.141 20:09:19 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:21.141 20:09:19 -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:21.141 20:09:19 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:21.141 20:09:19 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:21.399 20:09:19 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:21.399 20:09:19 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:21.399 20:09:19 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:21.399 20:09:19 -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:21.399 20:09:19 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:21.399 20:09:19 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:21.399 20:09:19 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:21.399 20:09:19 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:21.399 20:09:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.399 20:09:19 -- common/autotest_common.sh@10 -- # set +x 00:14:21.399 20:09:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.399 20:09:19 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:21.399 20:09:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.399 20:09:19 -- common/autotest_common.sh@10 -- # set +x 00:14:21.399 20:09:19 -- target/referrals.sh@82 -- # jq length 00:14:21.399 20:09:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.399 20:09:19 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:21.658 20:09:19 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:21.658 20:09:19 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:21.658 20:09:19 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:21.658 20:09:19 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:21.658 20:09:19 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:21.658 20:09:19 -- target/referrals.sh@26 -- # sort 00:14:21.658 20:09:19 -- target/referrals.sh@26 -- # echo 00:14:21.658 20:09:19 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:21.658 20:09:19 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:21.658 20:09:19 -- target/referrals.sh@86 -- # nvmftestfini 00:14:21.658 20:09:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:21.658 20:09:19 -- nvmf/common.sh@116 -- # sync 00:14:21.658 20:09:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:21.658 20:09:19 -- nvmf/common.sh@119 -- # set +e 00:14:21.658 20:09:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:21.658 20:09:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:21.658 rmmod nvme_tcp 00:14:21.658 rmmod nvme_fabrics 00:14:21.658 rmmod nvme_keyring 00:14:21.658 20:09:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:21.658 20:09:19 -- nvmf/common.sh@123 -- # set -e 00:14:21.658 20:09:19 -- nvmf/common.sh@124 -- # return 0 00:14:21.658 20:09:19 -- nvmf/common.sh@477 
-- # '[' -n 1444788 ']' 00:14:21.658 20:09:19 -- nvmf/common.sh@478 -- # killprocess 1444788 00:14:21.658 20:09:19 -- common/autotest_common.sh@926 -- # '[' -z 1444788 ']' 00:14:21.658 20:09:19 -- common/autotest_common.sh@930 -- # kill -0 1444788 00:14:21.658 20:09:19 -- common/autotest_common.sh@931 -- # uname 00:14:21.658 20:09:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:21.658 20:09:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1444788 00:14:21.658 20:09:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:21.658 20:09:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:21.658 20:09:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1444788' 00:14:21.658 killing process with pid 1444788 00:14:21.658 20:09:19 -- common/autotest_common.sh@945 -- # kill 1444788 00:14:21.658 20:09:19 -- common/autotest_common.sh@950 -- # wait 1444788 00:14:22.226 20:09:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:22.226 20:09:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:22.226 20:09:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:22.226 20:09:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:22.226 20:09:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:22.226 20:09:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.226 20:09:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.226 20:09:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.140 20:09:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:24.140 00:14:24.140 real 0m10.671s 00:14:24.140 user 0m11.897s 00:14:24.140 sys 0m4.772s 00:14:24.140 20:09:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:24.140 20:09:22 -- common/autotest_common.sh@10 -- # set +x 00:14:24.140 ************************************ 00:14:24.140 END TEST nvmf_referrals 00:14:24.140 ************************************ 00:14:24.402 20:09:22 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:24.402 20:09:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:24.402 20:09:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:24.402 20:09:22 -- common/autotest_common.sh@10 -- # set +x 00:14:24.402 ************************************ 00:14:24.402 START TEST nvmf_connect_disconnect 00:14:24.402 ************************************ 00:14:24.402 20:09:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:24.402 * Looking for test storage... 
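The referrals test that just finished above boils down to adding a discovery listener on port 8009, registering three referrals, checking that the target's RPC view and an NVMe discovery from the initiator agree, then removing them again. A condensed sketch of that round trip (the --hostnqn/--hostid flags used in the run are omitted here for brevity; $rpc as assumed above):

"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  "$rpc" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
# view from the target's RPC interface
"$rpc" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
# view from the initiator side; same jq filter target/referrals.sh uses to drop the local discovery entry
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  "$rpc" nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done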
00:14:24.402 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:24.402 20:09:22 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.402 20:09:22 -- nvmf/common.sh@7 -- # uname -s 00:14:24.402 20:09:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.402 20:09:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.402 20:09:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.402 20:09:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.402 20:09:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.402 20:09:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.402 20:09:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.402 20:09:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.402 20:09:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.402 20:09:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.402 20:09:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:24.402 20:09:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:14:24.402 20:09:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.402 20:09:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.402 20:09:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:24.402 20:09:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:24.402 20:09:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.402 20:09:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.402 20:09:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.402 20:09:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.402 20:09:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.402 20:09:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.402 20:09:22 -- paths/export.sh@5 -- # export PATH 00:14:24.402 20:09:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.402 20:09:22 -- nvmf/common.sh@46 -- # : 0 00:14:24.402 20:09:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:24.402 20:09:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:24.402 20:09:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:24.402 20:09:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.402 20:09:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.402 20:09:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:24.402 20:09:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:24.402 20:09:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:24.402 20:09:22 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:24.402 20:09:22 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:24.402 20:09:22 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:24.402 20:09:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:24.402 20:09:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.402 20:09:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:24.402 20:09:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:24.402 20:09:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:24.402 20:09:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.402 20:09:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.402 20:09:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.402 20:09:22 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:14:24.402 20:09:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:24.402 20:09:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:24.402 20:09:22 -- common/autotest_common.sh@10 -- # set +x 00:14:29.696 20:09:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:29.696 20:09:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:29.696 20:09:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:29.696 20:09:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:29.696 20:09:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:29.696 20:09:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:29.696 20:09:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:29.696 20:09:26 -- nvmf/common.sh@294 -- # net_devs=() 00:14:29.696 20:09:26 -- nvmf/common.sh@294 -- # local -ga net_devs 
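The harness is about to scan the PCI bus for supported NICs and resolve each matching function to its kernel net device through sysfs; that resolution is what produces the cvl_0_0/cvl_0_1 names seen just below. Stripped of the array bookkeeping, the lookup is simply:

pci=0000:27:00.0                      # one of the ice (0x8086:0x159b) functions found below
ls /sys/bus/pci/devices/$pci/net/     # prints the attached net device, e.g. cvl_0_0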
00:14:29.696 20:09:26 -- nvmf/common.sh@295 -- # e810=() 00:14:29.696 20:09:26 -- nvmf/common.sh@295 -- # local -ga e810 00:14:29.696 20:09:26 -- nvmf/common.sh@296 -- # x722=() 00:14:29.696 20:09:26 -- nvmf/common.sh@296 -- # local -ga x722 00:14:29.696 20:09:26 -- nvmf/common.sh@297 -- # mlx=() 00:14:29.696 20:09:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:29.696 20:09:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.696 20:09:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.696 20:09:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.696 20:09:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.696 20:09:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.696 20:09:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.696 20:09:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.696 20:09:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.696 20:09:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.696 20:09:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.696 20:09:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.696 20:09:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:29.696 20:09:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:29.696 20:09:26 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:14:29.696 20:09:26 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:14:29.696 20:09:26 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:14:29.696 20:09:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:29.696 20:09:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:29.696 20:09:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:29.696 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:29.696 20:09:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:29.696 20:09:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:29.696 20:09:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.696 20:09:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.696 20:09:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:29.696 20:09:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:29.696 20:09:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:29.696 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:29.696 20:09:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:29.696 20:09:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:29.696 20:09:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.696 20:09:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.696 20:09:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:29.696 20:09:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:29.696 20:09:26 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:14:29.696 20:09:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:29.696 20:09:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.696 20:09:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:29.696 20:09:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.696 20:09:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:29.696 Found net devices under 0000:27:00.0: 
cvl_0_0 00:14:29.696 20:09:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.696 20:09:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:29.696 20:09:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.696 20:09:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:29.696 20:09:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.696 20:09:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:29.696 Found net devices under 0000:27:00.1: cvl_0_1 00:14:29.696 20:09:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.696 20:09:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:29.696 20:09:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:29.696 20:09:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:29.696 20:09:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:29.696 20:09:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:29.696 20:09:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.696 20:09:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.696 20:09:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.696 20:09:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:29.696 20:09:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.696 20:09:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.696 20:09:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:29.696 20:09:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.696 20:09:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.696 20:09:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:29.696 20:09:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:29.697 20:09:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.697 20:09:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.697 20:09:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.697 20:09:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.697 20:09:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:29.697 20:09:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.697 20:09:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.697 20:09:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.697 20:09:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:29.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:14:29.697 00:14:29.697 --- 10.0.0.2 ping statistics --- 00:14:29.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.697 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:14:29.697 20:09:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:29.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:14:29.697 00:14:29.697 --- 10.0.0.1 ping statistics --- 00:14:29.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.697 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:14:29.697 20:09:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.697 20:09:27 -- nvmf/common.sh@410 -- # return 0 00:14:29.697 20:09:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:29.697 20:09:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.697 20:09:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:29.697 20:09:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:29.697 20:09:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.697 20:09:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:29.697 20:09:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:29.697 20:09:27 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:29.697 20:09:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:29.697 20:09:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:29.697 20:09:27 -- common/autotest_common.sh@10 -- # set +x 00:14:29.697 20:09:27 -- nvmf/common.sh@469 -- # nvmfpid=1449133 00:14:29.697 20:09:27 -- nvmf/common.sh@470 -- # waitforlisten 1449133 00:14:29.697 20:09:27 -- common/autotest_common.sh@819 -- # '[' -z 1449133 ']' 00:14:29.697 20:09:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.697 20:09:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:29.697 20:09:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.697 20:09:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:29.697 20:09:27 -- common/autotest_common.sh@10 -- # set +x 00:14:29.697 20:09:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:29.697 [2024-04-25 20:09:27.210032] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:29.697 [2024-04-25 20:09:27.210144] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.697 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.697 [2024-04-25 20:09:27.337405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:29.697 [2024-04-25 20:09:27.436207] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:29.697 [2024-04-25 20:09:27.436384] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.697 [2024-04-25 20:09:27.436398] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.697 [2024-04-25 20:09:27.436408] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
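The nvmf_tcp_init step above carves the TCP test topology out of the two E810 ports rather than using loopback: cvl_0_0 is moved into the private namespace cvl_0_0_ns_spdk and addressed as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits TCP port 4420, and one ping in each direction proves reachability before the target application is started. A minimal stand-alone sketch of that plumbing, assuming the same interface names and 10.0.0.0/24 addressing used in this run:

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"              # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.2                           # initiator -> target reachability check
  ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator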
00:14:29.697 [2024-04-25 20:09:27.436465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.697 [2024-04-25 20:09:27.436484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.697 [2024-04-25 20:09:27.436592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.697 [2024-04-25 20:09:27.436602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.267 20:09:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:30.267 20:09:27 -- common/autotest_common.sh@852 -- # return 0 00:14:30.267 20:09:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:30.267 20:09:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:30.267 20:09:27 -- common/autotest_common.sh@10 -- # set +x 00:14:30.267 20:09:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.267 20:09:27 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:30.267 20:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.267 20:09:27 -- common/autotest_common.sh@10 -- # set +x 00:14:30.267 [2024-04-25 20:09:27.937440] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.267 20:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.267 20:09:27 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:30.267 20:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.267 20:09:27 -- common/autotest_common.sh@10 -- # set +x 00:14:30.267 20:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.267 20:09:27 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:30.267 20:09:27 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:30.267 20:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.267 20:09:27 -- common/autotest_common.sh@10 -- # set +x 00:14:30.267 20:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.267 20:09:27 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:30.267 20:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.267 20:09:27 -- common/autotest_common.sh@10 -- # set +x 00:14:30.267 20:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.267 20:09:28 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.267 20:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.267 20:09:28 -- common/autotest_common.sh@10 -- # set +x 00:14:30.267 [2024-04-25 20:09:28.005606] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.267 20:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.268 20:09:28 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:14:30.268 20:09:28 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:14:30.268 20:09:28 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:14:30.268 20:09:28 -- target/connect_disconnect.sh@34 -- # set +x 00:14:32.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:14:42.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.534 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:16:34.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:01.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:13.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:29.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:38.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:41.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:45.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:48.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:52.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:55.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:57.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:59.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:01.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:04.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:06.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:10.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:13.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:15.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.141 20:13:17 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
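The hundred "disconnected 1 controller(s)" lines above are the body of the nightly connect_disconnect loop: with num_iterations=100 and NVME_CONNECT='nvme connect -i 8', each pass connects the host to nqn.2016-06.io.spdk:cnode1 over TCP and immediately tears the controller down again, and each disconnect prints one of those lines. A hedged host-side sketch of the loop with nvme-cli, assuming the listener created earlier at 10.0.0.2:4420 and substituting a fixed sleep where the real script waits for the namespace block device to appear:

  NQN=nqn.2016-06.io.spdk:cnode1
  for i in $(seq 1 100); do
      nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n "$NQN"   # -i 8 requests 8 I/O queues
      sleep 1                                                  # simplification: the test polls for the device instead
      nvme disconnect -n "$NQN"                                # prints "NQN:<nqn> disconnected 1 controller(s)"
  done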
00:18:20.141 20:13:17 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:18:20.141 20:13:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:20.141 20:13:17 -- nvmf/common.sh@116 -- # sync 00:18:20.141 20:13:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:20.141 20:13:17 -- nvmf/common.sh@119 -- # set +e 00:18:20.141 20:13:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:20.141 20:13:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:20.141 rmmod nvme_tcp 00:18:20.141 rmmod nvme_fabrics 00:18:20.141 rmmod nvme_keyring 00:18:20.141 20:13:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:20.141 20:13:17 -- nvmf/common.sh@123 -- # set -e 00:18:20.141 20:13:17 -- nvmf/common.sh@124 -- # return 0 00:18:20.141 20:13:17 -- nvmf/common.sh@477 -- # '[' -n 1449133 ']' 00:18:20.141 20:13:17 -- nvmf/common.sh@478 -- # killprocess 1449133 00:18:20.141 20:13:17 -- common/autotest_common.sh@926 -- # '[' -z 1449133 ']' 00:18:20.141 20:13:17 -- common/autotest_common.sh@930 -- # kill -0 1449133 00:18:20.141 20:13:17 -- common/autotest_common.sh@931 -- # uname 00:18:20.141 20:13:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:20.141 20:13:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1449133 00:18:20.141 20:13:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:20.141 20:13:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:20.141 20:13:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1449133' 00:18:20.141 killing process with pid 1449133 00:18:20.141 20:13:17 -- common/autotest_common.sh@945 -- # kill 1449133 00:18:20.141 20:13:17 -- common/autotest_common.sh@950 -- # wait 1449133 00:18:20.401 20:13:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:20.401 20:13:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:20.401 20:13:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:20.401 20:13:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:20.401 20:13:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:20.401 20:13:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.401 20:13:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.401 20:13:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.942 20:13:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:22.942 00:18:22.942 real 3m58.198s 00:18:22.942 user 15m17.386s 00:18:22.942 sys 0m13.081s 00:18:22.942 20:13:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:22.942 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:18:22.942 ************************************ 00:18:22.942 END TEST nvmf_connect_disconnect 00:18:22.942 ************************************ 00:18:22.942 20:13:20 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:22.942 20:13:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:22.942 20:13:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:22.942 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:18:22.942 ************************************ 00:18:22.942 START TEST nvmf_multitarget 00:18:22.942 ************************************ 00:18:22.942 20:13:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:22.942 * Looking for test storage... 
00:18:22.942 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:22.942 20:13:20 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.942 20:13:20 -- nvmf/common.sh@7 -- # uname -s 00:18:22.942 20:13:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.942 20:13:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.942 20:13:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.942 20:13:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.942 20:13:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.942 20:13:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.942 20:13:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.942 20:13:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.942 20:13:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.942 20:13:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.942 20:13:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:22.942 20:13:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:22.942 20:13:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.942 20:13:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.942 20:13:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:22.942 20:13:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:22.942 20:13:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.942 20:13:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.942 20:13:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.942 20:13:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.942 20:13:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.942 20:13:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.942 20:13:20 -- paths/export.sh@5 -- # export PATH 00:18:22.942 20:13:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.942 20:13:20 -- nvmf/common.sh@46 -- # : 0 00:18:22.942 20:13:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:22.942 20:13:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:22.942 20:13:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:22.942 20:13:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.942 20:13:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.942 20:13:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:22.942 20:13:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:22.942 20:13:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:22.942 20:13:20 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:22.942 20:13:20 -- target/multitarget.sh@15 -- # nvmftestinit 00:18:22.942 20:13:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:22.942 20:13:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.942 20:13:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:22.942 20:13:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:22.942 20:13:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:22.942 20:13:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.942 20:13:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.942 20:13:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.942 20:13:20 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:18:22.942 20:13:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:22.942 20:13:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:22.942 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:18:28.250 20:13:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:28.250 20:13:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:28.250 20:13:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:28.250 20:13:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:28.250 20:13:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:28.250 20:13:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:28.250 20:13:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:28.250 20:13:25 -- nvmf/common.sh@294 -- # net_devs=() 00:18:28.250 20:13:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:28.250 20:13:25 -- 
nvmf/common.sh@295 -- # e810=() 00:18:28.250 20:13:25 -- nvmf/common.sh@295 -- # local -ga e810 00:18:28.250 20:13:25 -- nvmf/common.sh@296 -- # x722=() 00:18:28.250 20:13:25 -- nvmf/common.sh@296 -- # local -ga x722 00:18:28.250 20:13:25 -- nvmf/common.sh@297 -- # mlx=() 00:18:28.250 20:13:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:28.250 20:13:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.250 20:13:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.250 20:13:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.250 20:13:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.250 20:13:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.250 20:13:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.250 20:13:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.250 20:13:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.250 20:13:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.250 20:13:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.250 20:13:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.250 20:13:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:28.250 20:13:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:28.250 20:13:25 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:18:28.250 20:13:25 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:18:28.250 20:13:25 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:18:28.250 20:13:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:28.250 20:13:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:28.250 20:13:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:28.250 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:28.250 20:13:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:28.251 20:13:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:28.251 20:13:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.251 20:13:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.251 20:13:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:28.251 20:13:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:28.251 20:13:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:28.251 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:28.251 20:13:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:28.251 20:13:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:28.251 20:13:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.251 20:13:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.251 20:13:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:28.251 20:13:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:28.251 20:13:25 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:18:28.251 20:13:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:28.251 20:13:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.251 20:13:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:28.251 20:13:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.251 20:13:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:28.251 Found net devices under 0000:27:00.0: cvl_0_0 00:18:28.251 
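As in the earlier tests, gather_supported_nvmf_pci_devs matches the known Intel (0x1592, 0x159b, 0x37d2) and Mellanox device IDs against the PCI bus and then resolves each hit to its kernel netdev through sysfs; here function 0000:27:00.0 (0x159b, ice driver) maps to cvl_0_0, and the second port 0000:27:00.1 is resolved the same way immediately below. A small sketch of that sysfs lookup, assuming a PCI address as input:

  pci=0000:27:00.0                              # example address from this run
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")       # strip the sysfs path, keep only the interface name
  echo "Found net devices under $pci: ${pci_net_devs[*]}"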
20:13:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.251 20:13:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:28.251 20:13:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.251 20:13:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:28.251 20:13:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.251 20:13:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:28.251 Found net devices under 0000:27:00.1: cvl_0_1 00:18:28.251 20:13:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.251 20:13:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:28.251 20:13:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:28.251 20:13:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:28.251 20:13:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:28.251 20:13:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:28.251 20:13:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.251 20:13:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.251 20:13:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.251 20:13:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:28.251 20:13:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.251 20:13:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.251 20:13:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:28.251 20:13:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.251 20:13:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.251 20:13:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:28.251 20:13:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:28.251 20:13:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.251 20:13:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.251 20:13:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.251 20:13:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.251 20:13:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:28.251 20:13:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.251 20:13:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.251 20:13:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.251 20:13:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:28.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:18:28.251 00:18:28.251 --- 10.0.0.2 ping statistics --- 00:18:28.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.251 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:18:28.251 20:13:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:28.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:18:28.251 00:18:28.251 --- 10.0.0.1 ping statistics --- 00:18:28.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.251 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:28.251 20:13:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.251 20:13:25 -- nvmf/common.sh@410 -- # return 0 00:18:28.251 20:13:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:28.251 20:13:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.251 20:13:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:28.251 20:13:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:28.251 20:13:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.251 20:13:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:28.251 20:13:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:28.251 20:13:25 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:18:28.251 20:13:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:28.251 20:13:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:28.251 20:13:25 -- common/autotest_common.sh@10 -- # set +x 00:18:28.251 20:13:25 -- nvmf/common.sh@469 -- # nvmfpid=1499343 00:18:28.251 20:13:25 -- nvmf/common.sh@470 -- # waitforlisten 1499343 00:18:28.251 20:13:25 -- common/autotest_common.sh@819 -- # '[' -z 1499343 ']' 00:18:28.251 20:13:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.251 20:13:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:28.251 20:13:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.251 20:13:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:28.251 20:13:25 -- common/autotest_common.sh@10 -- # set +x 00:18:28.251 20:13:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:28.251 [2024-04-25 20:13:25.800558] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:28.251 [2024-04-25 20:13:25.800661] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.251 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.251 [2024-04-25 20:13:25.922276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:28.251 [2024-04-25 20:13:26.021565] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:28.251 [2024-04-25 20:13:26.021741] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.251 [2024-04-25 20:13:26.021755] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.251 [2024-04-25 20:13:26.021764] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:28.251 [2024-04-25 20:13:26.021842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.251 [2024-04-25 20:13:26.021935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.251 [2024-04-25 20:13:26.022037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.251 [2024-04-25 20:13:26.022048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:28.821 20:13:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:28.821 20:13:26 -- common/autotest_common.sh@852 -- # return 0 00:18:28.821 20:13:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:28.821 20:13:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:28.821 20:13:26 -- common/autotest_common.sh@10 -- # set +x 00:18:28.821 20:13:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.821 20:13:26 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:28.821 20:13:26 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:28.821 20:13:26 -- target/multitarget.sh@21 -- # jq length 00:18:28.821 20:13:26 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:18:28.821 20:13:26 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:18:28.821 "nvmf_tgt_1" 00:18:28.821 20:13:26 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:18:29.081 "nvmf_tgt_2" 00:18:29.081 20:13:26 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:29.081 20:13:26 -- target/multitarget.sh@28 -- # jq length 00:18:29.081 20:13:26 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:18:29.081 20:13:26 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:18:29.081 true 00:18:29.081 20:13:26 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:18:29.340 true 00:18:29.340 20:13:27 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:29.340 20:13:27 -- target/multitarget.sh@35 -- # jq length 00:18:29.340 20:13:27 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:18:29.340 20:13:27 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:29.340 20:13:27 -- target/multitarget.sh@41 -- # nvmftestfini 00:18:29.340 20:13:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:29.340 20:13:27 -- nvmf/common.sh@116 -- # sync 00:18:29.340 20:13:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:29.340 20:13:27 -- nvmf/common.sh@119 -- # set +e 00:18:29.340 20:13:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:29.340 20:13:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:29.340 rmmod nvme_tcp 00:18:29.340 rmmod nvme_fabrics 00:18:29.340 rmmod nvme_keyring 00:18:29.340 20:13:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:29.340 20:13:27 -- nvmf/common.sh@123 -- # set -e 00:18:29.340 20:13:27 -- nvmf/common.sh@124 -- # return 0 00:18:29.340 20:13:27 -- nvmf/common.sh@477 
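With the target up, the multitarget run that follows drives test/nvmf/target/multitarget_rpc.py against the RPC socket: it confirms that only the default target exists, creates nvmf_tgt_1 and nvmf_tgt_2, checks the count went to three, then deletes both and checks it is back to one. A condensed sketch of that sequence, assuming it is invoked from the SPDK tree against the default /var/tmp/spdk.sock:

  rpc=test/nvmf/target/multitarget_rpc.py
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target so far
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # prints the new target's name
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default target only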
-- # '[' -n 1499343 ']' 00:18:29.340 20:13:27 -- nvmf/common.sh@478 -- # killprocess 1499343 00:18:29.340 20:13:27 -- common/autotest_common.sh@926 -- # '[' -z 1499343 ']' 00:18:29.340 20:13:27 -- common/autotest_common.sh@930 -- # kill -0 1499343 00:18:29.340 20:13:27 -- common/autotest_common.sh@931 -- # uname 00:18:29.340 20:13:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:29.340 20:13:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1499343 00:18:29.599 20:13:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:29.599 20:13:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:29.599 20:13:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1499343' 00:18:29.599 killing process with pid 1499343 00:18:29.599 20:13:27 -- common/autotest_common.sh@945 -- # kill 1499343 00:18:29.599 20:13:27 -- common/autotest_common.sh@950 -- # wait 1499343 00:18:29.857 20:13:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:29.857 20:13:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:29.857 20:13:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:29.857 20:13:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:29.857 20:13:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:29.857 20:13:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.857 20:13:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.857 20:13:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.394 20:13:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:32.394 00:18:32.394 real 0m9.450s 00:18:32.394 user 0m8.576s 00:18:32.394 sys 0m4.357s 00:18:32.394 20:13:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:32.394 20:13:29 -- common/autotest_common.sh@10 -- # set +x 00:18:32.394 ************************************ 00:18:32.394 END TEST nvmf_multitarget 00:18:32.394 ************************************ 00:18:32.394 20:13:29 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:32.394 20:13:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:32.394 20:13:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:32.394 20:13:29 -- common/autotest_common.sh@10 -- # set +x 00:18:32.394 ************************************ 00:18:32.394 START TEST nvmf_rpc 00:18:32.394 ************************************ 00:18:32.394 20:13:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:32.394 * Looking for test storage... 
00:18:32.394 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:32.394 20:13:29 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.394 20:13:29 -- nvmf/common.sh@7 -- # uname -s 00:18:32.394 20:13:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.394 20:13:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.394 20:13:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.394 20:13:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.394 20:13:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.394 20:13:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.394 20:13:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.394 20:13:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.394 20:13:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.394 20:13:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.394 20:13:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:32.394 20:13:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:32.394 20:13:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.394 20:13:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.394 20:13:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:32.394 20:13:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:32.394 20:13:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.394 20:13:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.394 20:13:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.394 20:13:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.394 20:13:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.394 20:13:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.394 20:13:29 -- paths/export.sh@5 -- # export PATH 00:18:32.394 20:13:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.394 20:13:29 -- nvmf/common.sh@46 -- # : 0 00:18:32.394 20:13:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:32.394 20:13:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:32.394 20:13:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:32.394 20:13:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.394 20:13:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.394 20:13:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:32.394 20:13:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:32.394 20:13:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:32.394 20:13:29 -- target/rpc.sh@11 -- # loops=5 00:18:32.394 20:13:29 -- target/rpc.sh@23 -- # nvmftestinit 00:18:32.394 20:13:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:32.394 20:13:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.394 20:13:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:32.394 20:13:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:32.394 20:13:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:32.394 20:13:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.394 20:13:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:32.394 20:13:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.394 20:13:29 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:18:32.394 20:13:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:32.394 20:13:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:32.394 20:13:29 -- common/autotest_common.sh@10 -- # set +x 00:18:37.670 20:13:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:37.670 20:13:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:37.670 20:13:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:37.670 20:13:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:37.670 20:13:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:37.670 20:13:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:37.670 20:13:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:37.670 20:13:34 -- nvmf/common.sh@294 -- # net_devs=() 00:18:37.670 20:13:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:37.670 20:13:34 -- nvmf/common.sh@295 -- # e810=() 00:18:37.670 20:13:34 -- nvmf/common.sh@295 -- # local -ga e810 
00:18:37.670 20:13:34 -- nvmf/common.sh@296 -- # x722=() 00:18:37.670 20:13:34 -- nvmf/common.sh@296 -- # local -ga x722 00:18:37.670 20:13:34 -- nvmf/common.sh@297 -- # mlx=() 00:18:37.670 20:13:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:37.670 20:13:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:37.670 20:13:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:37.670 20:13:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:37.670 20:13:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:37.670 20:13:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:37.670 20:13:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:37.670 20:13:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:37.671 20:13:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:37.671 20:13:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:37.671 20:13:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:37.671 20:13:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:37.671 20:13:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:37.671 20:13:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:37.671 20:13:34 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:18:37.671 20:13:34 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:18:37.671 20:13:34 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:18:37.671 20:13:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:37.671 20:13:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:37.671 20:13:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:37.671 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:37.671 20:13:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:37.671 20:13:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:37.671 20:13:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.671 20:13:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.671 20:13:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:37.671 20:13:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:37.671 20:13:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:37.671 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:37.671 20:13:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:37.671 20:13:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:37.671 20:13:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.671 20:13:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.671 20:13:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:37.671 20:13:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:37.671 20:13:34 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:18:37.671 20:13:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:37.671 20:13:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.671 20:13:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:37.671 20:13:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.671 20:13:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:37.671 Found net devices under 0000:27:00.0: cvl_0_0 00:18:37.671 20:13:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.671 20:13:34 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:37.671 20:13:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.671 20:13:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:37.671 20:13:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.671 20:13:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:37.671 Found net devices under 0000:27:00.1: cvl_0_1 00:18:37.671 20:13:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.671 20:13:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:37.671 20:13:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:37.671 20:13:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:37.671 20:13:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:37.671 20:13:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:37.671 20:13:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.671 20:13:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.671 20:13:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:37.671 20:13:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:37.671 20:13:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:37.671 20:13:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:37.671 20:13:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:37.671 20:13:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:37.671 20:13:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.671 20:13:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:37.671 20:13:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:37.671 20:13:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:37.671 20:13:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:37.671 20:13:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:37.671 20:13:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:37.671 20:13:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:37.671 20:13:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:37.671 20:13:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:37.671 20:13:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:37.671 20:13:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:37.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:37.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:18:37.671 00:18:37.671 --- 10.0.0.2 ping statistics --- 00:18:37.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.671 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:18:37.671 20:13:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:37.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:37.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:18:37.671 00:18:37.671 --- 10.0.0.1 ping statistics --- 00:18:37.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.671 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:18:37.671 20:13:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.671 20:13:35 -- nvmf/common.sh@410 -- # return 0 00:18:37.671 20:13:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:37.671 20:13:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.671 20:13:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:37.671 20:13:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:37.671 20:13:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.671 20:13:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:37.671 20:13:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:37.671 20:13:35 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:37.671 20:13:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:37.671 20:13:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:37.671 20:13:35 -- common/autotest_common.sh@10 -- # set +x 00:18:37.671 20:13:35 -- nvmf/common.sh@469 -- # nvmfpid=1503584 00:18:37.671 20:13:35 -- nvmf/common.sh@470 -- # waitforlisten 1503584 00:18:37.671 20:13:35 -- common/autotest_common.sh@819 -- # '[' -z 1503584 ']' 00:18:37.671 20:13:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.671 20:13:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:37.671 20:13:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.671 20:13:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:37.671 20:13:35 -- common/autotest_common.sh@10 -- # set +x 00:18:37.671 20:13:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:37.671 [2024-04-25 20:13:35.152688] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:37.671 [2024-04-25 20:13:35.152792] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.671 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.671 [2024-04-25 20:13:35.274911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:37.671 [2024-04-25 20:13:35.374387] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:37.671 [2024-04-25 20:13:35.374590] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.671 [2024-04-25 20:13:35.374604] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.671 [2024-04-25 20:13:35.374614] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
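As in the earlier sections, nvmfappstart launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF: shared-memory id 0, all tracepoint groups, core mask 0xF) and waitforlisten blocks until the application answers on /var/tmp/spdk.sock. A reduced sketch of that start-and-wait pattern, assuming the workspace layout of this job and substituting a simple socket poll for the real waitforlisten helper:

  NS=cvl_0_0_ns_spdk
  SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk          # path taken from this run
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done        # crude stand-in: wait for the RPC socket to appear
  echo "nvmf_tgt running as pid $nvmfpid"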
00:18:37.671 [2024-04-25 20:13:35.374765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.671 [2024-04-25 20:13:35.374864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.671 [2024-04-25 20:13:35.374964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.671 [2024-04-25 20:13:35.374976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:37.932 20:13:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:37.932 20:13:35 -- common/autotest_common.sh@852 -- # return 0 00:18:37.932 20:13:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:38.190 20:13:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:38.190 20:13:35 -- common/autotest_common.sh@10 -- # set +x 00:18:38.190 20:13:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.190 20:13:35 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:38.190 20:13:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.190 20:13:35 -- common/autotest_common.sh@10 -- # set +x 00:18:38.190 20:13:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.190 20:13:35 -- target/rpc.sh@26 -- # stats='{ 00:18:38.190 "tick_rate": 1900000000, 00:18:38.190 "poll_groups": [ 00:18:38.190 { 00:18:38.190 "name": "nvmf_tgt_poll_group_0", 00:18:38.190 "admin_qpairs": 0, 00:18:38.190 "io_qpairs": 0, 00:18:38.190 "current_admin_qpairs": 0, 00:18:38.190 "current_io_qpairs": 0, 00:18:38.190 "pending_bdev_io": 0, 00:18:38.190 "completed_nvme_io": 0, 00:18:38.190 "transports": [] 00:18:38.190 }, 00:18:38.190 { 00:18:38.190 "name": "nvmf_tgt_poll_group_1", 00:18:38.190 "admin_qpairs": 0, 00:18:38.190 "io_qpairs": 0, 00:18:38.190 "current_admin_qpairs": 0, 00:18:38.190 "current_io_qpairs": 0, 00:18:38.190 "pending_bdev_io": 0, 00:18:38.190 "completed_nvme_io": 0, 00:18:38.190 "transports": [] 00:18:38.190 }, 00:18:38.190 { 00:18:38.190 "name": "nvmf_tgt_poll_group_2", 00:18:38.190 "admin_qpairs": 0, 00:18:38.190 "io_qpairs": 0, 00:18:38.190 "current_admin_qpairs": 0, 00:18:38.190 "current_io_qpairs": 0, 00:18:38.190 "pending_bdev_io": 0, 00:18:38.190 "completed_nvme_io": 0, 00:18:38.190 "transports": [] 00:18:38.190 }, 00:18:38.190 { 00:18:38.190 "name": "nvmf_tgt_poll_group_3", 00:18:38.190 "admin_qpairs": 0, 00:18:38.190 "io_qpairs": 0, 00:18:38.190 "current_admin_qpairs": 0, 00:18:38.190 "current_io_qpairs": 0, 00:18:38.190 "pending_bdev_io": 0, 00:18:38.190 "completed_nvme_io": 0, 00:18:38.190 "transports": [] 00:18:38.190 } 00:18:38.190 ] 00:18:38.190 }' 00:18:38.190 20:13:35 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:38.190 20:13:35 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:38.190 20:13:35 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:38.190 20:13:35 -- target/rpc.sh@15 -- # wc -l 00:18:38.190 20:13:35 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:18:38.190 20:13:35 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:38.190 20:13:35 -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:38.190 20:13:35 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:38.190 20:13:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.190 20:13:35 -- common/autotest_common.sh@10 -- # set +x 00:18:38.190 [2024-04-25 20:13:36.000088] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.190 20:13:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.190 20:13:36 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:38.190 20:13:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.190 20:13:36 -- common/autotest_common.sh@10 -- # set +x 00:18:38.190 20:13:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.190 20:13:36 -- target/rpc.sh@33 -- # stats='{ 00:18:38.190 "tick_rate": 1900000000, 00:18:38.190 "poll_groups": [ 00:18:38.190 { 00:18:38.190 "name": "nvmf_tgt_poll_group_0", 00:18:38.190 "admin_qpairs": 0, 00:18:38.190 "io_qpairs": 0, 00:18:38.190 "current_admin_qpairs": 0, 00:18:38.190 "current_io_qpairs": 0, 00:18:38.190 "pending_bdev_io": 0, 00:18:38.190 "completed_nvme_io": 0, 00:18:38.190 "transports": [ 00:18:38.190 { 00:18:38.190 "trtype": "TCP" 00:18:38.190 } 00:18:38.190 ] 00:18:38.190 }, 00:18:38.190 { 00:18:38.190 "name": "nvmf_tgt_poll_group_1", 00:18:38.190 "admin_qpairs": 0, 00:18:38.190 "io_qpairs": 0, 00:18:38.190 "current_admin_qpairs": 0, 00:18:38.190 "current_io_qpairs": 0, 00:18:38.190 "pending_bdev_io": 0, 00:18:38.190 "completed_nvme_io": 0, 00:18:38.190 "transports": [ 00:18:38.190 { 00:18:38.190 "trtype": "TCP" 00:18:38.190 } 00:18:38.190 ] 00:18:38.190 }, 00:18:38.190 { 00:18:38.190 "name": "nvmf_tgt_poll_group_2", 00:18:38.190 "admin_qpairs": 0, 00:18:38.190 "io_qpairs": 0, 00:18:38.190 "current_admin_qpairs": 0, 00:18:38.190 "current_io_qpairs": 0, 00:18:38.190 "pending_bdev_io": 0, 00:18:38.190 "completed_nvme_io": 0, 00:18:38.190 "transports": [ 00:18:38.190 { 00:18:38.190 "trtype": "TCP" 00:18:38.190 } 00:18:38.190 ] 00:18:38.190 }, 00:18:38.190 { 00:18:38.190 "name": "nvmf_tgt_poll_group_3", 00:18:38.190 "admin_qpairs": 0, 00:18:38.190 "io_qpairs": 0, 00:18:38.190 "current_admin_qpairs": 0, 00:18:38.190 "current_io_qpairs": 0, 00:18:38.190 "pending_bdev_io": 0, 00:18:38.190 "completed_nvme_io": 0, 00:18:38.190 "transports": [ 00:18:38.190 { 00:18:38.190 "trtype": "TCP" 00:18:38.190 } 00:18:38.190 ] 00:18:38.190 } 00:18:38.190 ] 00:18:38.190 }' 00:18:38.190 20:13:36 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:38.190 20:13:36 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:38.190 20:13:36 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:38.190 20:13:36 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:38.190 20:13:36 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:38.190 20:13:36 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:38.190 20:13:36 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:38.190 20:13:36 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:38.190 20:13:36 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:38.190 20:13:36 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:38.190 20:13:36 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:18:38.190 20:13:36 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:38.190 20:13:36 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:38.190 20:13:36 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:38.190 20:13:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.190 20:13:36 -- common/autotest_common.sh@10 -- # set +x 00:18:38.448 Malloc1 00:18:38.448 20:13:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.448 20:13:36 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:38.448 20:13:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.448 20:13:36 -- common/autotest_common.sh@10 -- # set +x 00:18:38.448 
20:13:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.448 20:13:36 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:38.448 20:13:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.448 20:13:36 -- common/autotest_common.sh@10 -- # set +x 00:18:38.448 20:13:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.448 20:13:36 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:38.448 20:13:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.448 20:13:36 -- common/autotest_common.sh@10 -- # set +x 00:18:38.448 20:13:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.448 20:13:36 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:38.448 20:13:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.448 20:13:36 -- common/autotest_common.sh@10 -- # set +x 00:18:38.448 [2024-04-25 20:13:36.165899] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.448 20:13:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.448 20:13:36 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.2 -s 4420 00:18:38.448 20:13:36 -- common/autotest_common.sh@640 -- # local es=0 00:18:38.448 20:13:36 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.2 -s 4420 00:18:38.448 20:13:36 -- common/autotest_common.sh@628 -- # local arg=nvme 00:18:38.448 20:13:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:38.448 20:13:36 -- common/autotest_common.sh@632 -- # type -t nvme 00:18:38.448 20:13:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:38.448 20:13:36 -- common/autotest_common.sh@634 -- # type -P nvme 00:18:38.448 20:13:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:38.448 20:13:36 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:18:38.448 20:13:36 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:18:38.448 20:13:36 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.2 -s 4420 00:18:38.448 [2024-04-25 20:13:36.194659] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3' 00:18:38.448 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:38.448 could not add new controller: failed to write to nvme-fabrics device 00:18:38.448 20:13:36 -- common/autotest_common.sh@643 -- # es=1 00:18:38.448 20:13:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:38.448 20:13:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:38.448 20:13:36 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:18:38.448 20:13:36 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:38.448 20:13:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.448 20:13:36 -- common/autotest_common.sh@10 -- # set +x 00:18:38.448 20:13:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.448 20:13:36 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:39.826 20:13:37 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:39.827 20:13:37 -- common/autotest_common.sh@1177 -- # local i=0 00:18:39.827 20:13:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:39.827 20:13:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:39.827 20:13:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:41.732 20:13:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:41.732 20:13:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:41.732 20:13:39 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:41.732 20:13:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:41.732 20:13:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:41.732 20:13:39 -- common/autotest_common.sh@1187 -- # return 0 00:18:41.732 20:13:39 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:41.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:41.991 20:13:39 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:41.991 20:13:39 -- common/autotest_common.sh@1198 -- # local i=0 00:18:41.991 20:13:39 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:41.991 20:13:39 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:41.991 20:13:39 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:41.991 20:13:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:41.991 20:13:39 -- common/autotest_common.sh@1210 -- # return 0 00:18:41.991 20:13:39 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:41.991 20:13:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:41.991 20:13:39 -- common/autotest_common.sh@10 -- # set +x 00:18:41.991 20:13:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:41.991 20:13:39 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:41.991 20:13:39 -- common/autotest_common.sh@640 -- # local es=0 00:18:41.991 20:13:39 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:41.991 20:13:39 -- common/autotest_common.sh@628 -- # local arg=nvme 00:18:41.991 20:13:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:41.991 20:13:39 -- common/autotest_common.sh@632 -- # type -t nvme 00:18:41.991 20:13:39 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:41.991 20:13:39 -- common/autotest_common.sh@634 -- # type -P nvme 00:18:41.991 20:13:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:41.991 20:13:39 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:18:41.991 20:13:39 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:18:41.991 20:13:39 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:41.991 [2024-04-25 20:13:39.854899] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3' 00:18:41.991 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:41.991 could not add new controller: failed to write to nvme-fabrics device 00:18:41.991 20:13:39 -- common/autotest_common.sh@643 -- # es=1 00:18:41.991 20:13:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:41.991 20:13:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:41.991 20:13:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:41.991 20:13:39 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:41.991 20:13:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:41.991 20:13:39 -- common/autotest_common.sh@10 -- # set +x 00:18:41.991 20:13:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:41.991 20:13:39 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:43.895 20:13:41 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:43.895 20:13:41 -- common/autotest_common.sh@1177 -- # local i=0 00:18:43.895 20:13:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:43.895 20:13:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:43.895 20:13:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:45.803 20:13:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:45.803 20:13:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:45.803 20:13:43 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:45.803 20:13:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:45.803 20:13:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:45.803 20:13:43 -- common/autotest_common.sh@1187 -- # return 0 00:18:45.803 20:13:43 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:45.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:45.803 20:13:43 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:45.803 20:13:43 -- common/autotest_common.sh@1198 -- # local i=0 00:18:45.803 20:13:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:45.803 20:13:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:45.803 20:13:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:45.803 20:13:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:45.803 20:13:43 -- common/autotest_common.sh@1210 -- # return 0 00:18:45.803 20:13:43 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:45.803 20:13:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.803 20:13:43 -- common/autotest_common.sh@10 -- # set +x 00:18:45.803 20:13:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.803 20:13:43 -- target/rpc.sh@81 -- # seq 1 5 00:18:45.803 20:13:43 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:45.803 20:13:43 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:45.803 20:13:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.803 20:13:43 -- common/autotest_common.sh@10 -- # set +x 00:18:45.803 20:13:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.803 20:13:43 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.803 20:13:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.803 20:13:43 -- common/autotest_common.sh@10 -- # set +x 00:18:45.803 [2024-04-25 20:13:43.545756] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.803 20:13:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.803 20:13:43 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:45.803 20:13:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.803 20:13:43 -- common/autotest_common.sh@10 -- # set +x 00:18:45.803 20:13:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.803 20:13:43 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:45.803 20:13:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.803 20:13:43 -- common/autotest_common.sh@10 -- # set +x 00:18:45.803 20:13:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.803 20:13:43 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:47.183 20:13:44 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:47.183 20:13:44 -- common/autotest_common.sh@1177 -- # local i=0 00:18:47.183 20:13:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.183 20:13:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:47.183 20:13:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:49.087 20:13:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:49.087 20:13:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:49.087 20:13:46 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:49.087 20:13:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:49.087 20:13:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:49.087 20:13:46 -- common/autotest_common.sh@1187 -- # return 0 00:18:49.087 20:13:46 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:49.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:49.346 20:13:47 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:49.346 20:13:47 -- common/autotest_common.sh@1198 -- # local i=0 00:18:49.346 20:13:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:49.346 20:13:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
00:18:49.346 20:13:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:49.346 20:13:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:49.346 20:13:47 -- common/autotest_common.sh@1210 -- # return 0 00:18:49.346 20:13:47 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:49.346 20:13:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:49.346 20:13:47 -- common/autotest_common.sh@10 -- # set +x 00:18:49.346 20:13:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:49.346 20:13:47 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:49.346 20:13:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:49.346 20:13:47 -- common/autotest_common.sh@10 -- # set +x 00:18:49.346 20:13:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:49.346 20:13:47 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:49.346 20:13:47 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:49.346 20:13:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:49.346 20:13:47 -- common/autotest_common.sh@10 -- # set +x 00:18:49.346 20:13:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:49.346 20:13:47 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:49.346 20:13:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:49.346 20:13:47 -- common/autotest_common.sh@10 -- # set +x 00:18:49.346 [2024-04-25 20:13:47.172258] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.346 20:13:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:49.346 20:13:47 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:49.347 20:13:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:49.347 20:13:47 -- common/autotest_common.sh@10 -- # set +x 00:18:49.347 20:13:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:49.347 20:13:47 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:49.347 20:13:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:49.347 20:13:47 -- common/autotest_common.sh@10 -- # set +x 00:18:49.347 20:13:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:49.347 20:13:47 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:50.727 20:13:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:50.727 20:13:48 -- common/autotest_common.sh@1177 -- # local i=0 00:18:50.727 20:13:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:50.727 20:13:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:50.727 20:13:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:52.806 20:13:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:52.806 20:13:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:52.806 20:13:50 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:52.806 20:13:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:52.806 20:13:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:52.806 20:13:50 -- 
common/autotest_common.sh@1187 -- # return 0 00:18:52.806 20:13:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:53.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:53.066 20:13:50 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:53.066 20:13:50 -- common/autotest_common.sh@1198 -- # local i=0 00:18:53.066 20:13:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:53.066 20:13:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:53.066 20:13:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:53.066 20:13:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:53.066 20:13:50 -- common/autotest_common.sh@1210 -- # return 0 00:18:53.066 20:13:50 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:53.066 20:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.066 20:13:50 -- common/autotest_common.sh@10 -- # set +x 00:18:53.066 20:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.066 20:13:50 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:53.066 20:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.066 20:13:50 -- common/autotest_common.sh@10 -- # set +x 00:18:53.066 20:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.066 20:13:50 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:53.066 20:13:50 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:53.066 20:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.066 20:13:50 -- common/autotest_common.sh@10 -- # set +x 00:18:53.066 20:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.066 20:13:50 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:53.066 20:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.066 20:13:50 -- common/autotest_common.sh@10 -- # set +x 00:18:53.066 [2024-04-25 20:13:50.812430] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.066 20:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.066 20:13:50 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:53.066 20:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.066 20:13:50 -- common/autotest_common.sh@10 -- # set +x 00:18:53.066 20:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.066 20:13:50 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:53.066 20:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.066 20:13:50 -- common/autotest_common.sh@10 -- # set +x 00:18:53.066 20:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.066 20:13:50 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:54.439 20:13:52 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:54.439 20:13:52 -- common/autotest_common.sh@1177 -- # local i=0 00:18:54.439 20:13:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:54.439 20:13:52 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:18:54.439 20:13:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:56.343 20:13:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:56.343 20:13:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:56.343 20:13:54 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:56.343 20:13:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:56.343 20:13:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:56.343 20:13:54 -- common/autotest_common.sh@1187 -- # return 0 00:18:56.343 20:13:54 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:56.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:56.603 20:13:54 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:56.603 20:13:54 -- common/autotest_common.sh@1198 -- # local i=0 00:18:56.875 20:13:54 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:56.875 20:13:54 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:56.876 20:13:54 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:56.876 20:13:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:56.876 20:13:54 -- common/autotest_common.sh@1210 -- # return 0 00:18:56.876 20:13:54 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:56.876 20:13:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.876 20:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:56.876 20:13:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.876 20:13:54 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:56.876 20:13:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.876 20:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:56.876 20:13:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.876 20:13:54 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:56.876 20:13:54 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:56.876 20:13:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.876 20:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:56.876 20:13:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.876 20:13:54 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:56.876 20:13:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.876 20:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:56.876 [2024-04-25 20:13:54.580150] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.876 20:13:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.876 20:13:54 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:56.876 20:13:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.876 20:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:56.876 20:13:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.876 20:13:54 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:56.876 20:13:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.876 20:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:56.876 20:13:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.876 
20:13:54 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:58.250 20:13:56 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:58.250 20:13:56 -- common/autotest_common.sh@1177 -- # local i=0 00:18:58.250 20:13:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:58.250 20:13:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:58.250 20:13:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:00.782 20:13:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:00.782 20:13:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:00.782 20:13:58 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:00.782 20:13:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:00.782 20:13:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:00.782 20:13:58 -- common/autotest_common.sh@1187 -- # return 0 00:19:00.782 20:13:58 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:00.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:00.782 20:13:58 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:00.782 20:13:58 -- common/autotest_common.sh@1198 -- # local i=0 00:19:00.782 20:13:58 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:00.782 20:13:58 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:00.782 20:13:58 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:00.782 20:13:58 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:00.782 20:13:58 -- common/autotest_common.sh@1210 -- # return 0 00:19:00.782 20:13:58 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:00.782 20:13:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:00.782 20:13:58 -- common/autotest_common.sh@10 -- # set +x 00:19:00.782 20:13:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:00.782 20:13:58 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:00.782 20:13:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:00.782 20:13:58 -- common/autotest_common.sh@10 -- # set +x 00:19:00.782 20:13:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:00.782 20:13:58 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:00.782 20:13:58 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:00.782 20:13:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:00.782 20:13:58 -- common/autotest_common.sh@10 -- # set +x 00:19:00.782 20:13:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:00.782 20:13:58 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:00.782 20:13:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:00.782 20:13:58 -- common/autotest_common.sh@10 -- # set +x 00:19:00.782 [2024-04-25 20:13:58.302352] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.782 20:13:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:00.782 20:13:58 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:00.782 
20:13:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:00.782 20:13:58 -- common/autotest_common.sh@10 -- # set +x 00:19:00.782 20:13:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:00.782 20:13:58 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:00.782 20:13:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:00.782 20:13:58 -- common/autotest_common.sh@10 -- # set +x 00:19:00.782 20:13:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:00.782 20:13:58 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:02.159 20:13:59 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:02.159 20:13:59 -- common/autotest_common.sh@1177 -- # local i=0 00:19:02.159 20:13:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.159 20:13:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:02.159 20:13:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:04.062 20:14:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:04.062 20:14:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:04.062 20:14:01 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:04.062 20:14:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:04.062 20:14:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.062 20:14:01 -- common/autotest_common.sh@1187 -- # return 0 00:19:04.062 20:14:01 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:04.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:04.062 20:14:01 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:04.062 20:14:01 -- common/autotest_common.sh@1198 -- # local i=0 00:19:04.062 20:14:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:04.062 20:14:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:04.062 20:14:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:04.062 20:14:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:04.062 20:14:01 -- common/autotest_common.sh@1210 -- # return 0 00:19:04.062 20:14:01 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:04.062 20:14:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.062 20:14:01 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 20:14:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:01 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.321 20:14:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:01 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@99 -- # seq 1 5 00:19:04.321 20:14:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:04.321 20:14:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 [2024-04-25 20:14:02.037862] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:04.321 20:14:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 [2024-04-25 20:14:02.085832] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- 
common/autotest_common.sh@10 -- # set +x 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:04.321 20:14:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 [2024-04-25 20:14:02.133852] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:04.321 20:14:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.321 20:14:02 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.321 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.321 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.321 [2024-04-25 20:14:02.181918] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.321 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.322 
20:14:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:04.322 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.322 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.322 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.322 20:14:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:04.322 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.322 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.322 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.322 20:14:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:04.322 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.322 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.322 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.322 20:14:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.322 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.322 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.322 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.322 20:14:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:04.322 20:14:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:04.322 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.322 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.322 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.322 20:14:02 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.322 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.322 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.322 [2024-04-25 20:14:02.229968] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.322 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.322 20:14:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:04.322 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.322 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.322 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.322 20:14:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:04.322 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.322 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.322 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.322 20:14:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:04.322 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.322 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.580 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.580 20:14:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.580 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.581 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.581 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.581 20:14:02 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:19:04.581 20:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.581 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:04.581 20:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.581 20:14:02 -- target/rpc.sh@110 -- # stats='{ 00:19:04.581 "tick_rate": 1900000000, 00:19:04.581 "poll_groups": [ 00:19:04.581 { 00:19:04.581 "name": "nvmf_tgt_poll_group_0", 00:19:04.581 "admin_qpairs": 0, 00:19:04.581 "io_qpairs": 224, 00:19:04.581 "current_admin_qpairs": 0, 00:19:04.581 "current_io_qpairs": 0, 00:19:04.581 "pending_bdev_io": 0, 00:19:04.581 "completed_nvme_io": 471, 00:19:04.581 "transports": [ 00:19:04.581 { 00:19:04.581 "trtype": "TCP" 00:19:04.581 } 00:19:04.581 ] 00:19:04.581 }, 00:19:04.581 { 00:19:04.581 "name": "nvmf_tgt_poll_group_1", 00:19:04.581 "admin_qpairs": 1, 00:19:04.581 "io_qpairs": 223, 00:19:04.581 "current_admin_qpairs": 0, 00:19:04.581 "current_io_qpairs": 0, 00:19:04.581 "pending_bdev_io": 0, 00:19:04.581 "completed_nvme_io": 228, 00:19:04.581 "transports": [ 00:19:04.581 { 00:19:04.581 "trtype": "TCP" 00:19:04.581 } 00:19:04.581 ] 00:19:04.581 }, 00:19:04.581 { 00:19:04.581 "name": "nvmf_tgt_poll_group_2", 00:19:04.581 "admin_qpairs": 6, 00:19:04.581 "io_qpairs": 218, 00:19:04.581 "current_admin_qpairs": 0, 00:19:04.581 "current_io_qpairs": 0, 00:19:04.581 "pending_bdev_io": 0, 00:19:04.581 "completed_nvme_io": 218, 00:19:04.581 "transports": [ 00:19:04.581 { 00:19:04.581 "trtype": "TCP" 00:19:04.581 } 00:19:04.581 ] 00:19:04.581 }, 00:19:04.581 { 00:19:04.581 "name": "nvmf_tgt_poll_group_3", 00:19:04.581 "admin_qpairs": 0, 00:19:04.581 "io_qpairs": 224, 00:19:04.581 "current_admin_qpairs": 0, 00:19:04.581 "current_io_qpairs": 0, 00:19:04.581 "pending_bdev_io": 0, 00:19:04.581 "completed_nvme_io": 322, 00:19:04.581 "transports": [ 00:19:04.581 { 00:19:04.581 "trtype": "TCP" 00:19:04.581 } 00:19:04.581 ] 00:19:04.581 } 00:19:04.581 ] 00:19:04.581 }' 00:19:04.581 20:14:02 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:19:04.581 20:14:02 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:19:04.581 20:14:02 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:04.581 20:14:02 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:04.581 20:14:02 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:19:04.581 20:14:02 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:19:04.581 20:14:02 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:04.581 20:14:02 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:04.581 20:14:02 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:04.581 20:14:02 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:19:04.581 20:14:02 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:19:04.581 20:14:02 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:19:04.581 20:14:02 -- target/rpc.sh@123 -- # nvmftestfini 00:19:04.581 20:14:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:04.581 20:14:02 -- nvmf/common.sh@116 -- # sync 00:19:04.581 20:14:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:04.581 20:14:02 -- nvmf/common.sh@119 -- # set +e 00:19:04.581 20:14:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:04.581 20:14:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:04.581 rmmod nvme_tcp 00:19:04.581 rmmod nvme_fabrics 00:19:04.581 rmmod nvme_keyring 00:19:04.581 20:14:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:04.581 20:14:02 -- nvmf/common.sh@123 -- # set -e 00:19:04.581 20:14:02 -- 
nvmf/common.sh@124 -- # return 0 00:19:04.581 20:14:02 -- nvmf/common.sh@477 -- # '[' -n 1503584 ']' 00:19:04.581 20:14:02 -- nvmf/common.sh@478 -- # killprocess 1503584 00:19:04.581 20:14:02 -- common/autotest_common.sh@926 -- # '[' -z 1503584 ']' 00:19:04.581 20:14:02 -- common/autotest_common.sh@930 -- # kill -0 1503584 00:19:04.581 20:14:02 -- common/autotest_common.sh@931 -- # uname 00:19:04.581 20:14:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:04.581 20:14:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1503584 00:19:04.581 20:14:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:04.581 20:14:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:04.581 20:14:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1503584' 00:19:04.581 killing process with pid 1503584 00:19:04.581 20:14:02 -- common/autotest_common.sh@945 -- # kill 1503584 00:19:04.581 20:14:02 -- common/autotest_common.sh@950 -- # wait 1503584 00:19:05.148 20:14:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:05.148 20:14:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:05.148 20:14:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:05.148 20:14:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:05.148 20:14:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:05.148 20:14:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.148 20:14:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.148 20:14:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.685 20:14:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:07.685 00:19:07.685 real 0m35.217s 00:19:07.685 user 1m50.941s 00:19:07.685 sys 0m5.205s 00:19:07.685 20:14:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:07.685 20:14:05 -- common/autotest_common.sh@10 -- # set +x 00:19:07.685 ************************************ 00:19:07.685 END TEST nvmf_rpc 00:19:07.685 ************************************ 00:19:07.685 20:14:05 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:19:07.685 20:14:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:07.685 20:14:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:07.685 20:14:05 -- common/autotest_common.sh@10 -- # set +x 00:19:07.685 ************************************ 00:19:07.685 START TEST nvmf_invalid 00:19:07.685 ************************************ 00:19:07.685 20:14:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:19:07.685 * Looking for test storage... 
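With the target running, target/rpc.sh drives everything through rpc_cmd, a wrapper around scripts/rpc.py talking to the /var/tmp/spdk.sock socket named in the startup banner. The run above creates the TCP transport and a Malloc bdev once, then loops over subsystem create / listener / namespace / connect / disconnect / delete (plus the negative host-access check, where nvme connect is expected to fail with "does not allow host" until allow_any_host or nvmf_subsystem_add_host is issued), and finally reads nvmf_get_stats so jsum can total the admin/io qpair counters. Below is a condensed sketch of one such iteration, assuming that socket path and reusing the NQN, host NQN, serial, and address values shown in the log.

    # Sketch of one target/rpc.sh iteration; values copied from the log above.
    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    nqn=nqn.2016-06.io.spdk:cnode1
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3
    hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3

    $rpc nvmf_create_transport -t tcp -o -u 8192           # one-time transport setup
    $rpc bdev_malloc_create 64 512 -b Malloc1               # 64 MiB bdev, 512 B blocks

    $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 5
    $rpc nvmf_subsystem_allow_any_host $nqn                 # or: nvmf_subsystem_add_host $nqn $hostnqn

    nvme connect --hostnqn=$hostnqn --hostid=$hostid -t tcp -n $nqn -a 10.0.0.2 -s 4420
    # waitforserial then greps lsblk -l -o NAME,SERIAL for SPDKISFASTANDAWESOME
    nvme disconnect -n $nqn

    $rpc nvmf_subsystem_remove_ns $nqn 5
    $rpc nvmf_delete_subsystem $nqn

    $rpc nvmf_get_stats                                     # qpair counters summed by jsum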
00:19:07.685 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:07.685 20:14:05 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.685 20:14:05 -- nvmf/common.sh@7 -- # uname -s 00:19:07.686 20:14:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.686 20:14:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.686 20:14:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.686 20:14:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.686 20:14:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.686 20:14:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.686 20:14:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.686 20:14:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.686 20:14:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.686 20:14:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.686 20:14:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:07.686 20:14:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:07.686 20:14:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.686 20:14:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.686 20:14:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:07.686 20:14:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:07.686 20:14:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.686 20:14:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.686 20:14:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.686 20:14:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.686 20:14:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.686 20:14:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.686 20:14:05 -- paths/export.sh@5 -- # export PATH 00:19:07.686 20:14:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.686 20:14:05 -- nvmf/common.sh@46 -- # : 0 00:19:07.686 20:14:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:07.686 20:14:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:07.686 20:14:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:07.686 20:14:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.686 20:14:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.686 20:14:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:07.686 20:14:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:07.686 20:14:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:07.686 20:14:05 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:19:07.686 20:14:05 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:19:07.686 20:14:05 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:07.686 20:14:05 -- target/invalid.sh@14 -- # target=foobar 00:19:07.686 20:14:05 -- target/invalid.sh@16 -- # RANDOM=0 00:19:07.686 20:14:05 -- target/invalid.sh@34 -- # nvmftestinit 00:19:07.686 20:14:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:07.686 20:14:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.686 20:14:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:07.686 20:14:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:07.686 20:14:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:07.686 20:14:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.686 20:14:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.686 20:14:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.686 20:14:05 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:19:07.686 20:14:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:07.686 20:14:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:07.686 20:14:05 -- common/autotest_common.sh@10 -- # set +x 00:19:12.962 20:14:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:12.962 20:14:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:12.962 20:14:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:12.962 20:14:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:12.962 20:14:10 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:12.962 20:14:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:12.962 20:14:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:12.962 20:14:10 -- nvmf/common.sh@294 -- # net_devs=() 00:19:12.962 20:14:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:12.962 20:14:10 -- nvmf/common.sh@295 -- # e810=() 00:19:12.962 20:14:10 -- nvmf/common.sh@295 -- # local -ga e810 00:19:12.962 20:14:10 -- nvmf/common.sh@296 -- # x722=() 00:19:12.962 20:14:10 -- nvmf/common.sh@296 -- # local -ga x722 00:19:12.962 20:14:10 -- nvmf/common.sh@297 -- # mlx=() 00:19:12.962 20:14:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:12.962 20:14:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:12.962 20:14:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:12.962 20:14:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:12.962 20:14:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:12.962 20:14:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:12.962 20:14:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:12.962 20:14:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:12.962 20:14:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:12.962 20:14:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:12.962 20:14:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:12.962 20:14:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:12.962 20:14:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:12.962 20:14:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:12.962 20:14:10 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:19:12.962 20:14:10 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:19:12.962 20:14:10 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:19:12.962 20:14:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:12.963 20:14:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:12.963 20:14:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:12.963 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:12.963 20:14:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:12.963 20:14:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:12.963 20:14:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.963 20:14:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.963 20:14:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:12.963 20:14:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:12.963 20:14:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:12.963 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:12.963 20:14:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:12.963 20:14:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:12.963 20:14:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.963 20:14:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.963 20:14:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:12.963 20:14:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:12.963 20:14:10 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:19:12.963 20:14:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:12.963 20:14:10 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.963 20:14:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:12.963 20:14:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.223 20:14:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:13.223 Found net devices under 0000:27:00.0: cvl_0_0 00:19:13.223 20:14:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.223 20:14:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:13.223 20:14:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.223 20:14:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:13.223 20:14:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.223 20:14:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:13.223 Found net devices under 0000:27:00.1: cvl_0_1 00:19:13.223 20:14:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.223 20:14:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:13.223 20:14:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:13.223 20:14:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:13.223 20:14:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:13.223 20:14:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:13.223 20:14:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:13.223 20:14:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:13.223 20:14:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:13.223 20:14:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:13.223 20:14:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:13.223 20:14:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:13.223 20:14:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:13.223 20:14:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:13.223 20:14:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:13.223 20:14:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:13.223 20:14:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:13.223 20:14:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:13.223 20:14:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:13.223 20:14:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:13.223 20:14:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:13.223 20:14:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:13.223 20:14:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:13.223 20:14:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:13.223 20:14:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:13.223 20:14:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:13.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:13.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.716 ms 00:19:13.223 00:19:13.223 --- 10.0.0.2 ping statistics --- 00:19:13.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.223 rtt min/avg/max/mdev = 0.716/0.716/0.716/0.000 ms 00:19:13.223 20:14:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:13.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:13.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.384 ms 00:19:13.484 00:19:13.484 --- 10.0.0.1 ping statistics --- 00:19:13.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.484 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:19:13.484 20:14:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:13.484 20:14:11 -- nvmf/common.sh@410 -- # return 0 00:19:13.484 20:14:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:13.484 20:14:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:13.484 20:14:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:13.484 20:14:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:13.484 20:14:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:13.484 20:14:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:13.484 20:14:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:13.484 20:14:11 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:19:13.484 20:14:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:13.484 20:14:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:13.484 20:14:11 -- common/autotest_common.sh@10 -- # set +x 00:19:13.484 20:14:11 -- nvmf/common.sh@469 -- # nvmfpid=1513036 00:19:13.484 20:14:11 -- nvmf/common.sh@470 -- # waitforlisten 1513036 00:19:13.484 20:14:11 -- common/autotest_common.sh@819 -- # '[' -z 1513036 ']' 00:19:13.484 20:14:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.484 20:14:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:13.484 20:14:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.484 20:14:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:13.484 20:14:11 -- common/autotest_common.sh@10 -- # set +x 00:19:13.484 20:14:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:13.484 [2024-04-25 20:14:11.260978] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:13.484 [2024-04-25 20:14:11.261084] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.484 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.484 [2024-04-25 20:14:11.388160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:13.745 [2024-04-25 20:14:11.488426] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:13.745 [2024-04-25 20:14:11.488615] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.745 [2024-04-25 20:14:11.488629] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.745 [2024-04-25 20:14:11.488638] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
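At this point nvmftestinit has built the loopback fixture: the two net devices found under the E810 ports (cvl_0_0, cvl_0_1) are split across namespaces, cvl_0_0 is moved into cvl_0_0_ns_spdk with 10.0.0.2 and serves as the target side, cvl_0_1 stays in the root namespace with 10.0.0.1 as the initiator side, TCP port 4420 is opened, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace with a 0xF core mask (reactors on cores 0-3, as the notices below show). A condensed replay of the commands recorded in the trace above; interface names and paths are the ones this particular rig reports:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    modprobe nvme-tcp
    # the target runs inside the namespace; the two pings above confirm both directions work
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &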
00:19:13.745 [2024-04-25 20:14:11.488720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.745 [2024-04-25 20:14:11.488856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.745 [2024-04-25 20:14:11.488954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.745 [2024-04-25 20:14:11.488965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:14.316 20:14:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:14.316 20:14:11 -- common/autotest_common.sh@852 -- # return 0 00:19:14.316 20:14:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:14.316 20:14:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:14.316 20:14:11 -- common/autotest_common.sh@10 -- # set +x 00:19:14.316 20:14:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.316 20:14:12 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:14.316 20:14:12 -- target/invalid.sh@40 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15948 00:19:14.316 [2024-04-25 20:14:12.141154] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:19:14.316 20:14:12 -- target/invalid.sh@40 -- # out='request: 00:19:14.316 { 00:19:14.316 "nqn": "nqn.2016-06.io.spdk:cnode15948", 00:19:14.316 "tgt_name": "foobar", 00:19:14.316 "method": "nvmf_create_subsystem", 00:19:14.316 "req_id": 1 00:19:14.316 } 00:19:14.316 Got JSON-RPC error response 00:19:14.316 response: 00:19:14.316 { 00:19:14.316 "code": -32603, 00:19:14.316 "message": "Unable to find target foobar" 00:19:14.316 }' 00:19:14.316 20:14:12 -- target/invalid.sh@41 -- # [[ request: 00:19:14.316 { 00:19:14.316 "nqn": "nqn.2016-06.io.spdk:cnode15948", 00:19:14.316 "tgt_name": "foobar", 00:19:14.316 "method": "nvmf_create_subsystem", 00:19:14.316 "req_id": 1 00:19:14.316 } 00:19:14.316 Got JSON-RPC error response 00:19:14.316 response: 00:19:14.316 { 00:19:14.316 "code": -32603, 00:19:14.316 "message": "Unable to find target foobar" 00:19:14.316 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:19:14.316 20:14:12 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:19:14.316 20:14:12 -- target/invalid.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21782 00:19:14.577 [2024-04-25 20:14:12.301428] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21782: invalid serial number 'SPDKISFASTANDAWESOME' 00:19:14.577 20:14:12 -- target/invalid.sh@45 -- # out='request: 00:19:14.577 { 00:19:14.577 "nqn": "nqn.2016-06.io.spdk:cnode21782", 00:19:14.577 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:14.577 "method": "nvmf_create_subsystem", 00:19:14.577 "req_id": 1 00:19:14.577 } 00:19:14.577 Got JSON-RPC error response 00:19:14.577 response: 00:19:14.577 { 00:19:14.577 "code": -32602, 00:19:14.577 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:14.577 }' 00:19:14.577 20:14:12 -- target/invalid.sh@46 -- # [[ request: 00:19:14.577 { 00:19:14.577 "nqn": "nqn.2016-06.io.spdk:cnode21782", 00:19:14.577 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:14.577 "method": "nvmf_create_subsystem", 00:19:14.577 "req_id": 1 00:19:14.577 } 00:19:14.577 Got JSON-RPC error response 00:19:14.577 response: 00:19:14.577 { 00:19:14.577 
"code": -32602, 00:19:14.577 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:14.577 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:14.577 20:14:12 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:19:14.577 20:14:12 -- target/invalid.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7025 00:19:14.577 [2024-04-25 20:14:12.469559] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7025: invalid model number 'SPDK_Controller' 00:19:14.577 20:14:12 -- target/invalid.sh@50 -- # out='request: 00:19:14.577 { 00:19:14.577 "nqn": "nqn.2016-06.io.spdk:cnode7025", 00:19:14.577 "model_number": "SPDK_Controller\u001f", 00:19:14.577 "method": "nvmf_create_subsystem", 00:19:14.577 "req_id": 1 00:19:14.577 } 00:19:14.577 Got JSON-RPC error response 00:19:14.577 response: 00:19:14.577 { 00:19:14.577 "code": -32602, 00:19:14.577 "message": "Invalid MN SPDK_Controller\u001f" 00:19:14.577 }' 00:19:14.577 20:14:12 -- target/invalid.sh@51 -- # [[ request: 00:19:14.577 { 00:19:14.577 "nqn": "nqn.2016-06.io.spdk:cnode7025", 00:19:14.577 "model_number": "SPDK_Controller\u001f", 00:19:14.577 "method": "nvmf_create_subsystem", 00:19:14.577 "req_id": 1 00:19:14.577 } 00:19:14.577 Got JSON-RPC error response 00:19:14.577 response: 00:19:14.577 { 00:19:14.577 "code": -32602, 00:19:14.577 "message": "Invalid MN SPDK_Controller\u001f" 00:19:14.577 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:14.577 20:14:12 -- target/invalid.sh@54 -- # gen_random_s 21 00:19:14.577 20:14:12 -- target/invalid.sh@19 -- # local length=21 ll 00:19:14.578 20:14:12 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:14.578 20:14:12 -- target/invalid.sh@21 -- # local chars 00:19:14.578 20:14:12 -- target/invalid.sh@22 -- # local string 00:19:14.578 20:14:12 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:14.578 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.578 20:14:12 -- target/invalid.sh@25 -- # printf %x 98 00:19:14.578 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x62' 00:19:14.578 20:14:12 -- target/invalid.sh@25 -- # string+=b 00:19:14.578 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.578 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.838 20:14:12 -- target/invalid.sh@25 -- # printf %x 66 00:19:14.838 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x42' 00:19:14.838 20:14:12 -- target/invalid.sh@25 -- # string+=B 00:19:14.838 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.838 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.838 20:14:12 -- target/invalid.sh@25 -- # printf %x 85 00:19:14.838 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x55' 00:19:14.838 20:14:12 -- target/invalid.sh@25 -- # string+=U 00:19:14.838 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.838 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.838 20:14:12 -- target/invalid.sh@25 -- # printf %x 111 00:19:14.838 20:14:12 -- target/invalid.sh@25 -- # echo -e 
'\x6f' 00:19:14.838 20:14:12 -- target/invalid.sh@25 -- # string+=o 00:19:14.838 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.838 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.838 20:14:12 -- target/invalid.sh@25 -- # printf %x 64 00:19:14.838 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x40' 00:19:14.838 20:14:12 -- target/invalid.sh@25 -- # string+=@ 00:19:14.838 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.838 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.838 20:14:12 -- target/invalid.sh@25 -- # printf %x 80 00:19:14.838 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x50' 00:19:14.838 20:14:12 -- target/invalid.sh@25 -- # string+=P 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # printf %x 99 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x63' 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # string+=c 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # printf %x 45 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # string+=- 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # printf %x 104 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x68' 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # string+=h 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # printf %x 108 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # string+=l 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # printf %x 122 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # string+=z 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # printf %x 118 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x76' 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # string+=v 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # printf %x 63 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # string+='?' 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # printf %x 46 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # string+=. 
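The per-character trace running through this stretch is gen_random_s from invalid.sh assembling a 21-character serial number (one byte longer than the 20-byte serial number field NVMe allows): for each position it picks an ASCII code from the chars table, prints it with printf %x, and appends the character with echo -e; the finished string is then passed to nvmf_create_subsystem -s, which must reject it. A condensed, hypothetical bash equivalent of that generator (the shipped helper lives in test/nvmf/target/invalid.sh and draws from the full 32-127 table):

    # hypothetical sketch of a gen_random_s-style generator, not the shipped helper
    gen_random_s() {
        local length=$1 ll string=
        for (( ll = 0; ll < length; ll++ )); do
            # pick a printable ASCII code (32..126), render it as \xHH, append the character
            printf -v hex '\\x%x' $((RANDOM % 95 + 32))
            string+=$(echo -e "$hex")
        done
        echo "$string"
    }

invalid.sh sets RANDOM=0 near the top of the run (visible earlier in the trace), so the serial and model numbers generated here come out the same on every pass.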
00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # printf %x 81 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x51' 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # string+=Q 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # printf %x 86 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x56' 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # string+=V 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # printf %x 119 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x77' 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # string+=w 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # printf %x 123 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # string+='{' 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # printf %x 52 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x34' 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # string+=4 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # printf %x 53 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x35' 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # string+=5 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # printf %x 83 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x53' 00:19:14.839 20:14:12 -- target/invalid.sh@25 -- # string+=S 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:14.839 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:14.839 20:14:12 -- target/invalid.sh@28 -- # [[ b == \- ]] 00:19:14.839 20:14:12 -- target/invalid.sh@31 -- # echo 'bBUo@Pc-hlzv?.QVw{45S' 00:19:14.839 20:14:12 -- target/invalid.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'bBUo@Pc-hlzv?.QVw{45S' nqn.2016-06.io.spdk:cnode8945 00:19:15.100 [2024-04-25 20:14:12.773945] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8945: invalid serial number 'bBUo@Pc-hlzv?.QVw{45S' 00:19:15.100 20:14:12 -- target/invalid.sh@54 -- # out='request: 00:19:15.100 { 00:19:15.100 "nqn": "nqn.2016-06.io.spdk:cnode8945", 00:19:15.100 "serial_number": "bBUo@Pc-hlzv?.QVw{45S", 00:19:15.100 "method": "nvmf_create_subsystem", 00:19:15.100 "req_id": 1 00:19:15.100 } 00:19:15.100 Got JSON-RPC error response 00:19:15.100 response: 00:19:15.100 { 00:19:15.100 "code": -32602, 00:19:15.100 "message": "Invalid SN bBUo@Pc-hlzv?.QVw{45S" 00:19:15.100 }' 00:19:15.100 20:14:12 -- target/invalid.sh@55 -- # [[ request: 00:19:15.100 { 00:19:15.100 "nqn": "nqn.2016-06.io.spdk:cnode8945", 00:19:15.100 "serial_number": "bBUo@Pc-hlzv?.QVw{45S", 
00:19:15.100 "method": "nvmf_create_subsystem", 00:19:15.100 "req_id": 1 00:19:15.100 } 00:19:15.100 Got JSON-RPC error response 00:19:15.100 response: 00:19:15.100 { 00:19:15.100 "code": -32602, 00:19:15.100 "message": "Invalid SN bBUo@Pc-hlzv?.QVw{45S" 00:19:15.100 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:15.100 20:14:12 -- target/invalid.sh@58 -- # gen_random_s 41 00:19:15.100 20:14:12 -- target/invalid.sh@19 -- # local length=41 ll 00:19:15.100 20:14:12 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:15.100 20:14:12 -- target/invalid.sh@21 -- # local chars 00:19:15.100 20:14:12 -- target/invalid.sh@22 -- # local string 00:19:15.100 20:14:12 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:15.100 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.100 20:14:12 -- target/invalid.sh@25 -- # printf %x 42 00:19:15.100 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:19:15.100 20:14:12 -- target/invalid.sh@25 -- # string+='*' 00:19:15.100 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.100 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 119 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x77' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=w 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 113 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x71' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=q 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 62 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+='>' 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 99 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x63' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=c 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 65 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x41' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=A 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 125 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+='}' 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- 
target/invalid.sh@25 -- # printf %x 83 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x53' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=S 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 116 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x74' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=t 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 82 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x52' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=R 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 49 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x31' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=1 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 44 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=, 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 73 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x49' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=I 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 105 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x69' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=i 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 55 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x37' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=7 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 50 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x32' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=2 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 50 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x32' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=2 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 83 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x53' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=S 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- 
target/invalid.sh@25 -- # printf %x 74 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=J 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 97 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x61' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=a 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 121 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x79' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=y 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 70 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x46' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=F 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 80 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x50' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=P 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 33 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x21' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+='!' 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 44 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=, 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 88 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x58' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=X 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 71 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x47' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=G 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 68 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x44' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=D 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:12 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # printf %x 89 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # echo -e '\x59' 00:19:15.101 20:14:12 -- target/invalid.sh@25 -- # string+=Y 00:19:15.101 20:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:13 -- 
target/invalid.sh@25 -- # printf %x 46 00:19:15.101 20:14:13 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:19:15.101 20:14:13 -- target/invalid.sh@25 -- # string+=. 00:19:15.101 20:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:13 -- target/invalid.sh@25 -- # printf %x 95 00:19:15.101 20:14:13 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:19:15.101 20:14:13 -- target/invalid.sh@25 -- # string+=_ 00:19:15.101 20:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:13 -- target/invalid.sh@25 -- # printf %x 63 00:19:15.101 20:14:13 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:19:15.101 20:14:13 -- target/invalid.sh@25 -- # string+='?' 00:19:15.101 20:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:13 -- target/invalid.sh@25 -- # printf %x 104 00:19:15.101 20:14:13 -- target/invalid.sh@25 -- # echo -e '\x68' 00:19:15.101 20:14:13 -- target/invalid.sh@25 -- # string+=h 00:19:15.101 20:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.101 20:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.101 20:14:13 -- target/invalid.sh@25 -- # printf %x 38 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # echo -e '\x26' 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # string+='&' 00:19:15.362 20:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.362 20:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # printf %x 64 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # echo -e '\x40' 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # string+=@ 00:19:15.362 20:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.362 20:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # printf %x 79 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # string+=O 00:19:15.362 20:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.362 20:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # printf %x 44 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # string+=, 00:19:15.362 20:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.362 20:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # printf %x 73 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # echo -e '\x49' 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # string+=I 00:19:15.362 20:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.362 20:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # printf %x 49 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # echo -e '\x31' 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # string+=1 00:19:15.362 20:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.362 20:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # printf %x 61 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # string+== 00:19:15.362 20:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.362 20:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.362 20:14:13 -- 
target/invalid.sh@25 -- # printf %x 82 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # echo -e '\x52' 00:19:15.362 20:14:13 -- target/invalid.sh@25 -- # string+=R 00:19:15.362 20:14:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.362 20:14:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.362 20:14:13 -- target/invalid.sh@28 -- # [[ * == \- ]] 00:19:15.362 20:14:13 -- target/invalid.sh@31 -- # echo '*wq>cA}StR1,Ii722SJayFP!,XGDY._?h&@O,I1=R' 00:19:15.362 20:14:13 -- target/invalid.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '*wq>cA}StR1,Ii722SJayFP!,XGDY._?h&@O,I1=R' nqn.2016-06.io.spdk:cnode22623 00:19:15.362 [2024-04-25 20:14:13.214460] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22623: invalid model number '*wq>cA}StR1,Ii722SJayFP!,XGDY._?h&@O,I1=R' 00:19:15.362 20:14:13 -- target/invalid.sh@58 -- # out='request: 00:19:15.362 { 00:19:15.362 "nqn": "nqn.2016-06.io.spdk:cnode22623", 00:19:15.362 "model_number": "*wq>cA}StR1,Ii722SJayFP!,XGDY._?h&@O,I1=R", 00:19:15.362 "method": "nvmf_create_subsystem", 00:19:15.362 "req_id": 1 00:19:15.362 } 00:19:15.362 Got JSON-RPC error response 00:19:15.362 response: 00:19:15.362 { 00:19:15.362 "code": -32602, 00:19:15.362 "message": "Invalid MN *wq>cA}StR1,Ii722SJayFP!,XGDY._?h&@O,I1=R" 00:19:15.362 }' 00:19:15.362 20:14:13 -- target/invalid.sh@59 -- # [[ request: 00:19:15.362 { 00:19:15.362 "nqn": "nqn.2016-06.io.spdk:cnode22623", 00:19:15.362 "model_number": "*wq>cA}StR1,Ii722SJayFP!,XGDY._?h&@O,I1=R", 00:19:15.362 "method": "nvmf_create_subsystem", 00:19:15.362 "req_id": 1 00:19:15.362 } 00:19:15.362 Got JSON-RPC error response 00:19:15.362 response: 00:19:15.362 { 00:19:15.362 "code": -32602, 00:19:15.362 "message": "Invalid MN *wq>cA}StR1,Ii722SJayFP!,XGDY._?h&@O,I1=R" 00:19:15.362 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:15.362 20:14:13 -- target/invalid.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:19:15.622 [2024-04-25 20:14:13.370793] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.622 20:14:13 -- target/invalid.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:19:15.890 20:14:13 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:19:15.890 20:14:13 -- target/invalid.sh@67 -- # echo '' 00:19:15.890 20:14:13 -- target/invalid.sh@67 -- # head -n 1 00:19:15.890 20:14:13 -- target/invalid.sh@67 -- # IP= 00:19:15.890 20:14:13 -- target/invalid.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:19:15.890 [2024-04-25 20:14:13.699195] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:19:15.890 20:14:13 -- target/invalid.sh@69 -- # out='request: 00:19:15.890 { 00:19:15.890 "nqn": "nqn.2016-06.io.spdk:cnode", 00:19:15.890 "listen_address": { 00:19:15.890 "trtype": "tcp", 00:19:15.890 "traddr": "", 00:19:15.890 "trsvcid": "4421" 00:19:15.890 }, 00:19:15.890 "method": "nvmf_subsystem_remove_listener", 00:19:15.890 "req_id": 1 00:19:15.890 } 00:19:15.890 Got JSON-RPC error response 00:19:15.890 response: 00:19:15.890 { 00:19:15.890 "code": -32602, 00:19:15.890 "message": "Invalid parameters" 00:19:15.890 }' 00:19:15.890 20:14:13 -- target/invalid.sh@70 -- # [[ request: 00:19:15.890 { 00:19:15.890 "nqn": "nqn.2016-06.io.spdk:cnode", 00:19:15.890 
"listen_address": { 00:19:15.890 "trtype": "tcp", 00:19:15.890 "traddr": "", 00:19:15.890 "trsvcid": "4421" 00:19:15.890 }, 00:19:15.890 "method": "nvmf_subsystem_remove_listener", 00:19:15.890 "req_id": 1 00:19:15.890 } 00:19:15.890 Got JSON-RPC error response 00:19:15.890 response: 00:19:15.890 { 00:19:15.890 "code": -32602, 00:19:15.890 "message": "Invalid parameters" 00:19:15.890 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:19:15.890 20:14:13 -- target/invalid.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10894 -i 0 00:19:16.181 [2024-04-25 20:14:13.855365] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10894: invalid cntlid range [0-65519] 00:19:16.181 20:14:13 -- target/invalid.sh@73 -- # out='request: 00:19:16.181 { 00:19:16.181 "nqn": "nqn.2016-06.io.spdk:cnode10894", 00:19:16.181 "min_cntlid": 0, 00:19:16.181 "method": "nvmf_create_subsystem", 00:19:16.181 "req_id": 1 00:19:16.181 } 00:19:16.181 Got JSON-RPC error response 00:19:16.181 response: 00:19:16.181 { 00:19:16.181 "code": -32602, 00:19:16.181 "message": "Invalid cntlid range [0-65519]" 00:19:16.181 }' 00:19:16.181 20:14:13 -- target/invalid.sh@74 -- # [[ request: 00:19:16.181 { 00:19:16.181 "nqn": "nqn.2016-06.io.spdk:cnode10894", 00:19:16.181 "min_cntlid": 0, 00:19:16.181 "method": "nvmf_create_subsystem", 00:19:16.181 "req_id": 1 00:19:16.181 } 00:19:16.181 Got JSON-RPC error response 00:19:16.181 response: 00:19:16.181 { 00:19:16.181 "code": -32602, 00:19:16.181 "message": "Invalid cntlid range [0-65519]" 00:19:16.181 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:16.181 20:14:13 -- target/invalid.sh@75 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode581 -i 65520 00:19:16.181 [2024-04-25 20:14:14.015546] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode581: invalid cntlid range [65520-65519] 00:19:16.181 20:14:14 -- target/invalid.sh@75 -- # out='request: 00:19:16.181 { 00:19:16.181 "nqn": "nqn.2016-06.io.spdk:cnode581", 00:19:16.181 "min_cntlid": 65520, 00:19:16.181 "method": "nvmf_create_subsystem", 00:19:16.181 "req_id": 1 00:19:16.181 } 00:19:16.181 Got JSON-RPC error response 00:19:16.181 response: 00:19:16.181 { 00:19:16.181 "code": -32602, 00:19:16.181 "message": "Invalid cntlid range [65520-65519]" 00:19:16.181 }' 00:19:16.181 20:14:14 -- target/invalid.sh@76 -- # [[ request: 00:19:16.181 { 00:19:16.181 "nqn": "nqn.2016-06.io.spdk:cnode581", 00:19:16.181 "min_cntlid": 65520, 00:19:16.181 "method": "nvmf_create_subsystem", 00:19:16.181 "req_id": 1 00:19:16.181 } 00:19:16.181 Got JSON-RPC error response 00:19:16.181 response: 00:19:16.181 { 00:19:16.181 "code": -32602, 00:19:16.181 "message": "Invalid cntlid range [65520-65519]" 00:19:16.181 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:16.181 20:14:14 -- target/invalid.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22474 -I 0 00:19:16.441 [2024-04-25 20:14:14.175726] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22474: invalid cntlid range [1-0] 00:19:16.441 20:14:14 -- target/invalid.sh@77 -- # out='request: 00:19:16.441 { 00:19:16.441 "nqn": "nqn.2016-06.io.spdk:cnode22474", 00:19:16.441 "max_cntlid": 0, 00:19:16.441 "method": "nvmf_create_subsystem", 00:19:16.441 "req_id": 1 00:19:16.441 } 00:19:16.441 
Got JSON-RPC error response 00:19:16.441 response: 00:19:16.441 { 00:19:16.441 "code": -32602, 00:19:16.441 "message": "Invalid cntlid range [1-0]" 00:19:16.441 }' 00:19:16.441 20:14:14 -- target/invalid.sh@78 -- # [[ request: 00:19:16.441 { 00:19:16.441 "nqn": "nqn.2016-06.io.spdk:cnode22474", 00:19:16.441 "max_cntlid": 0, 00:19:16.441 "method": "nvmf_create_subsystem", 00:19:16.441 "req_id": 1 00:19:16.441 } 00:19:16.441 Got JSON-RPC error response 00:19:16.441 response: 00:19:16.441 { 00:19:16.441 "code": -32602, 00:19:16.441 "message": "Invalid cntlid range [1-0]" 00:19:16.441 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:16.441 20:14:14 -- target/invalid.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27633 -I 65520 00:19:16.441 [2024-04-25 20:14:14.335925] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27633: invalid cntlid range [1-65520] 00:19:16.441 20:14:14 -- target/invalid.sh@79 -- # out='request: 00:19:16.441 { 00:19:16.441 "nqn": "nqn.2016-06.io.spdk:cnode27633", 00:19:16.441 "max_cntlid": 65520, 00:19:16.441 "method": "nvmf_create_subsystem", 00:19:16.441 "req_id": 1 00:19:16.441 } 00:19:16.441 Got JSON-RPC error response 00:19:16.441 response: 00:19:16.441 { 00:19:16.441 "code": -32602, 00:19:16.441 "message": "Invalid cntlid range [1-65520]" 00:19:16.441 }' 00:19:16.441 20:14:14 -- target/invalid.sh@80 -- # [[ request: 00:19:16.441 { 00:19:16.441 "nqn": "nqn.2016-06.io.spdk:cnode27633", 00:19:16.441 "max_cntlid": 65520, 00:19:16.441 "method": "nvmf_create_subsystem", 00:19:16.441 "req_id": 1 00:19:16.441 } 00:19:16.441 Got JSON-RPC error response 00:19:16.441 response: 00:19:16.441 { 00:19:16.441 "code": -32602, 00:19:16.441 "message": "Invalid cntlid range [1-65520]" 00:19:16.441 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:16.441 20:14:14 -- target/invalid.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20282 -i 6 -I 5 00:19:16.702 [2024-04-25 20:14:14.496164] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20282: invalid cntlid range [6-5] 00:19:16.702 20:14:14 -- target/invalid.sh@83 -- # out='request: 00:19:16.702 { 00:19:16.702 "nqn": "nqn.2016-06.io.spdk:cnode20282", 00:19:16.702 "min_cntlid": 6, 00:19:16.702 "max_cntlid": 5, 00:19:16.702 "method": "nvmf_create_subsystem", 00:19:16.702 "req_id": 1 00:19:16.702 } 00:19:16.702 Got JSON-RPC error response 00:19:16.702 response: 00:19:16.702 { 00:19:16.702 "code": -32602, 00:19:16.702 "message": "Invalid cntlid range [6-5]" 00:19:16.702 }' 00:19:16.702 20:14:14 -- target/invalid.sh@84 -- # [[ request: 00:19:16.702 { 00:19:16.702 "nqn": "nqn.2016-06.io.spdk:cnode20282", 00:19:16.702 "min_cntlid": 6, 00:19:16.702 "max_cntlid": 5, 00:19:16.702 "method": "nvmf_create_subsystem", 00:19:16.702 "req_id": 1 00:19:16.702 } 00:19:16.702 Got JSON-RPC error response 00:19:16.702 response: 00:19:16.702 { 00:19:16.702 "code": -32602, 00:19:16.702 "message": "Invalid cntlid range [6-5]" 00:19:16.702 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:16.702 20:14:14 -- target/invalid.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:19:16.702 20:14:14 -- target/invalid.sh@87 -- # out='request: 00:19:16.702 { 00:19:16.702 "name": "foobar", 00:19:16.702 "method": "nvmf_delete_target", 00:19:16.702 "req_id": 1 
00:19:16.702 } 00:19:16.702 Got JSON-RPC error response 00:19:16.702 response: 00:19:16.702 { 00:19:16.702 "code": -32602, 00:19:16.702 "message": "The specified target doesn'\''t exist, cannot delete it." 00:19:16.702 }' 00:19:16.702 20:14:14 -- target/invalid.sh@88 -- # [[ request: 00:19:16.702 { 00:19:16.702 "name": "foobar", 00:19:16.702 "method": "nvmf_delete_target", 00:19:16.702 "req_id": 1 00:19:16.702 } 00:19:16.702 Got JSON-RPC error response 00:19:16.702 response: 00:19:16.702 { 00:19:16.702 "code": -32602, 00:19:16.702 "message": "The specified target doesn't exist, cannot delete it." 00:19:16.702 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:19:16.702 20:14:14 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:19:16.702 20:14:14 -- target/invalid.sh@91 -- # nvmftestfini 00:19:16.702 20:14:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:16.702 20:14:14 -- nvmf/common.sh@116 -- # sync 00:19:16.702 20:14:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:16.702 20:14:14 -- nvmf/common.sh@119 -- # set +e 00:19:16.702 20:14:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:16.702 20:14:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:16.702 rmmod nvme_tcp 00:19:16.963 rmmod nvme_fabrics 00:19:16.963 rmmod nvme_keyring 00:19:16.963 20:14:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:16.963 20:14:14 -- nvmf/common.sh@123 -- # set -e 00:19:16.963 20:14:14 -- nvmf/common.sh@124 -- # return 0 00:19:16.963 20:14:14 -- nvmf/common.sh@477 -- # '[' -n 1513036 ']' 00:19:16.963 20:14:14 -- nvmf/common.sh@478 -- # killprocess 1513036 00:19:16.963 20:14:14 -- common/autotest_common.sh@926 -- # '[' -z 1513036 ']' 00:19:16.963 20:14:14 -- common/autotest_common.sh@930 -- # kill -0 1513036 00:19:16.963 20:14:14 -- common/autotest_common.sh@931 -- # uname 00:19:16.963 20:14:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:16.963 20:14:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1513036 00:19:16.963 20:14:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:16.963 20:14:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:16.963 20:14:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1513036' 00:19:16.963 killing process with pid 1513036 00:19:16.963 20:14:14 -- common/autotest_common.sh@945 -- # kill 1513036 00:19:16.963 20:14:14 -- common/autotest_common.sh@950 -- # wait 1513036 00:19:17.534 20:14:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:17.534 20:14:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:17.534 20:14:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:17.534 20:14:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:17.534 20:14:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:17.534 20:14:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.534 20:14:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.534 20:14:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.445 20:14:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:19.445 00:19:19.445 real 0m12.167s 00:19:19.445 user 0m17.851s 00:19:19.445 sys 0m5.521s 00:19:19.445 20:14:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:19.445 20:14:17 -- common/autotest_common.sh@10 -- # set +x 00:19:19.445 ************************************ 00:19:19.445 END TEST nvmf_invalid 
00:19:19.445 ************************************ 00:19:19.445 20:14:17 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:19:19.445 20:14:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:19.445 20:14:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:19.445 20:14:17 -- common/autotest_common.sh@10 -- # set +x 00:19:19.445 ************************************ 00:19:19.445 START TEST nvmf_abort 00:19:19.445 ************************************ 00:19:19.445 20:14:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:19:19.445 * Looking for test storage... 00:19:19.445 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:19.445 20:14:17 -- target/abort.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:19.445 20:14:17 -- nvmf/common.sh@7 -- # uname -s 00:19:19.445 20:14:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.445 20:14:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.445 20:14:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.445 20:14:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.445 20:14:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.445 20:14:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.445 20:14:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.445 20:14:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.445 20:14:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.445 20:14:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.445 20:14:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:19.445 20:14:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:19.445 20:14:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.445 20:14:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.445 20:14:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:19.445 20:14:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:19.445 20:14:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.445 20:14:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.445 20:14:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.445 20:14:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.445 20:14:17 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.445 20:14:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.445 20:14:17 -- paths/export.sh@5 -- # export PATH 00:19:19.445 20:14:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.445 20:14:17 -- nvmf/common.sh@46 -- # : 0 00:19:19.445 20:14:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:19.445 20:14:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:19.445 20:14:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:19.446 20:14:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.446 20:14:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.446 20:14:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:19.446 20:14:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:19.446 20:14:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:19.446 20:14:17 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:19.446 20:14:17 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:19:19.446 20:14:17 -- target/abort.sh@14 -- # nvmftestinit 00:19:19.446 20:14:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:19.446 20:14:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.446 20:14:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:19.446 20:14:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:19.446 20:14:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:19.446 20:14:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.446 20:14:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.446 20:14:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.446 20:14:17 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:19:19.446 20:14:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:19.446 20:14:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:19.446 20:14:17 -- common/autotest_common.sh@10 -- # set +x 00:19:26.020 20:14:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 
pci 00:19:26.020 20:14:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:26.020 20:14:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:26.020 20:14:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:26.020 20:14:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:26.020 20:14:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:26.020 20:14:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:26.020 20:14:22 -- nvmf/common.sh@294 -- # net_devs=() 00:19:26.020 20:14:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:26.020 20:14:22 -- nvmf/common.sh@295 -- # e810=() 00:19:26.020 20:14:22 -- nvmf/common.sh@295 -- # local -ga e810 00:19:26.020 20:14:22 -- nvmf/common.sh@296 -- # x722=() 00:19:26.020 20:14:22 -- nvmf/common.sh@296 -- # local -ga x722 00:19:26.020 20:14:22 -- nvmf/common.sh@297 -- # mlx=() 00:19:26.020 20:14:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:26.020 20:14:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.020 20:14:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.020 20:14:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.020 20:14:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.020 20:14:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.020 20:14:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.020 20:14:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.020 20:14:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.020 20:14:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.020 20:14:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.020 20:14:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.020 20:14:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:26.020 20:14:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:26.020 20:14:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:26.020 20:14:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:26.020 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:26.020 20:14:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:26.020 20:14:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:26.020 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:26.020 20:14:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:26.020 
20:14:22 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:26.020 20:14:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.020 20:14:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:26.020 20:14:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.020 20:14:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:26.020 Found net devices under 0000:27:00.0: cvl_0_0 00:19:26.020 20:14:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.020 20:14:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:26.020 20:14:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.020 20:14:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:26.020 20:14:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.020 20:14:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:26.020 Found net devices under 0000:27:00.1: cvl_0_1 00:19:26.020 20:14:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.020 20:14:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:26.020 20:14:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:26.020 20:14:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:26.020 20:14:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.020 20:14:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.020 20:14:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.020 20:14:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:26.020 20:14:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.020 20:14:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.020 20:14:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:26.020 20:14:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.020 20:14:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.020 20:14:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:26.020 20:14:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:26.020 20:14:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.020 20:14:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.020 20:14:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:26.020 20:14:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:26.020 20:14:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:26.020 20:14:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:26.020 20:14:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:26.020 20:14:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:26.020 20:14:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:26.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:26.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:19:26.020 00:19:26.020 --- 10.0.0.2 ping statistics --- 00:19:26.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.020 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:19:26.020 20:14:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:26.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:26.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:19:26.020 00:19:26.020 --- 10.0.0.1 ping statistics --- 00:19:26.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.020 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:19:26.020 20:14:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.020 20:14:22 -- nvmf/common.sh@410 -- # return 0 00:19:26.020 20:14:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:26.020 20:14:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.020 20:14:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:26.020 20:14:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.020 20:14:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:26.020 20:14:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:26.020 20:14:22 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:19:26.020 20:14:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:26.020 20:14:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:26.020 20:14:22 -- common/autotest_common.sh@10 -- # set +x 00:19:26.020 20:14:22 -- nvmf/common.sh@469 -- # nvmfpid=1517875 00:19:26.020 20:14:22 -- nvmf/common.sh@470 -- # waitforlisten 1517875 00:19:26.020 20:14:22 -- common/autotest_common.sh@819 -- # '[' -z 1517875 ']' 00:19:26.020 20:14:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.020 20:14:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:26.020 20:14:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.020 20:14:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:26.020 20:14:22 -- common/autotest_common.sh@10 -- # set +x 00:19:26.021 20:14:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:26.021 [2024-04-25 20:14:23.044931] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:26.021 [2024-04-25 20:14:23.045043] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.021 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.021 [2024-04-25 20:14:23.170961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:26.021 [2024-04-25 20:14:23.268700] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:26.021 [2024-04-25 20:14:23.268882] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.021 [2024-04-25 20:14:23.268895] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:26.021 [2024-04-25 20:14:23.268905] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:26.021 [2024-04-25 20:14:23.269048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.021 [2024-04-25 20:14:23.269156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.021 [2024-04-25 20:14:23.269167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:26.021 20:14:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:26.021 20:14:23 -- common/autotest_common.sh@852 -- # return 0 00:19:26.021 20:14:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:26.021 20:14:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:26.021 20:14:23 -- common/autotest_common.sh@10 -- # set +x 00:19:26.021 20:14:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.021 20:14:23 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:19:26.021 20:14:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.021 20:14:23 -- common/autotest_common.sh@10 -- # set +x 00:19:26.021 [2024-04-25 20:14:23.802077] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.021 20:14:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.021 20:14:23 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:19:26.021 20:14:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.021 20:14:23 -- common/autotest_common.sh@10 -- # set +x 00:19:26.021 Malloc0 00:19:26.021 20:14:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.021 20:14:23 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:26.021 20:14:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.021 20:14:23 -- common/autotest_common.sh@10 -- # set +x 00:19:26.021 Delay0 00:19:26.021 20:14:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.021 20:14:23 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:26.021 20:14:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.021 20:14:23 -- common/autotest_common.sh@10 -- # set +x 00:19:26.021 20:14:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.021 20:14:23 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:19:26.021 20:14:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.021 20:14:23 -- common/autotest_common.sh@10 -- # set +x 00:19:26.021 20:14:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.021 20:14:23 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:26.021 20:14:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.021 20:14:23 -- common/autotest_common.sh@10 -- # set +x 00:19:26.021 [2024-04-25 20:14:23.891387] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.021 20:14:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.021 20:14:23 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:26.021 20:14:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.021 20:14:23 -- common/autotest_common.sh@10 -- # set +x 00:19:26.021 20:14:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:19:26.021 20:14:23 -- target/abort.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:19:26.280 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.280 [2024-04-25 20:14:24.085896] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:28.813 Initializing NVMe Controllers 00:19:28.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:19:28.813 controller IO queue size 128 less than required 00:19:28.813 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:19:28.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:19:28.813 Initialization complete. Launching workers. 00:19:28.813 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 47865 00:19:28.813 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 47926, failed to submit 62 00:19:28.813 success 47865, unsuccess 61, failed 0 00:19:28.813 20:14:26 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:28.813 20:14:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.813 20:14:26 -- common/autotest_common.sh@10 -- # set +x 00:19:28.813 20:14:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.813 20:14:26 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:19:28.813 20:14:26 -- target/abort.sh@38 -- # nvmftestfini 00:19:28.813 20:14:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:28.813 20:14:26 -- nvmf/common.sh@116 -- # sync 00:19:28.813 20:14:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:28.813 20:14:26 -- nvmf/common.sh@119 -- # set +e 00:19:28.813 20:14:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:28.813 20:14:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:28.813 rmmod nvme_tcp 00:19:28.813 rmmod nvme_fabrics 00:19:28.813 rmmod nvme_keyring 00:19:28.813 20:14:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:28.813 20:14:26 -- nvmf/common.sh@123 -- # set -e 00:19:28.813 20:14:26 -- nvmf/common.sh@124 -- # return 0 00:19:28.813 20:14:26 -- nvmf/common.sh@477 -- # '[' -n 1517875 ']' 00:19:28.813 20:14:26 -- nvmf/common.sh@478 -- # killprocess 1517875 00:19:28.813 20:14:26 -- common/autotest_common.sh@926 -- # '[' -z 1517875 ']' 00:19:28.813 20:14:26 -- common/autotest_common.sh@930 -- # kill -0 1517875 00:19:28.813 20:14:26 -- common/autotest_common.sh@931 -- # uname 00:19:28.813 20:14:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:28.813 20:14:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1517875 00:19:28.813 20:14:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:28.813 20:14:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:28.813 20:14:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1517875' 00:19:28.813 killing process with pid 1517875 00:19:28.813 20:14:26 -- common/autotest_common.sh@945 -- # kill 1517875 00:19:28.813 20:14:26 -- common/autotest_common.sh@950 -- # wait 1517875 00:19:29.071 20:14:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:29.071 20:14:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:29.071 20:14:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:29.071 20:14:26 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:29.071 20:14:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:29.071 20:14:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.071 20:14:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:29.071 20:14:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.980 20:14:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:30.980 00:19:30.980 real 0m11.533s 00:19:30.980 user 0m13.806s 00:19:30.980 sys 0m4.745s 00:19:30.980 20:14:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:30.980 20:14:28 -- common/autotest_common.sh@10 -- # set +x 00:19:30.980 ************************************ 00:19:30.980 END TEST nvmf_abort 00:19:30.980 ************************************ 00:19:30.980 20:14:28 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:19:30.980 20:14:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:30.980 20:14:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:30.980 20:14:28 -- common/autotest_common.sh@10 -- # set +x 00:19:30.980 ************************************ 00:19:30.980 START TEST nvmf_ns_hotplug_stress 00:19:30.980 ************************************ 00:19:30.980 20:14:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:19:31.241 * Looking for test storage... 00:19:31.241 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:31.241 20:14:28 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:31.241 20:14:28 -- nvmf/common.sh@7 -- # uname -s 00:19:31.241 20:14:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:31.241 20:14:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.241 20:14:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.241 20:14:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.241 20:14:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.241 20:14:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.241 20:14:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.241 20:14:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.241 20:14:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.241 20:14:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.241 20:14:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:31.241 20:14:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:31.241 20:14:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.241 20:14:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.241 20:14:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:31.241 20:14:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:31.241 20:14:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.241 20:14:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.241 20:14:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.241 20:14:28 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.241 20:14:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.241 20:14:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.241 20:14:28 -- paths/export.sh@5 -- # export PATH 00:19:31.241 20:14:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.241 20:14:28 -- nvmf/common.sh@46 -- # : 0 00:19:31.241 20:14:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:31.241 20:14:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:31.241 20:14:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:31.241 20:14:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.241 20:14:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.241 20:14:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:31.241 20:14:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:31.241 20:14:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:31.241 20:14:28 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:19:31.241 20:14:28 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:19:31.241 20:14:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:31.241 20:14:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.241 20:14:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:31.241 20:14:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:31.241 20:14:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:31.241 20:14:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:19:31.241 20:14:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.241 20:14:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.241 20:14:28 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:19:31.241 20:14:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:31.241 20:14:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:31.241 20:14:28 -- common/autotest_common.sh@10 -- # set +x 00:19:37.817 20:14:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:37.817 20:14:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:37.817 20:14:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:37.817 20:14:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:37.817 20:14:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:37.817 20:14:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:37.817 20:14:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:37.817 20:14:34 -- nvmf/common.sh@294 -- # net_devs=() 00:19:37.817 20:14:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:37.817 20:14:34 -- nvmf/common.sh@295 -- # e810=() 00:19:37.817 20:14:34 -- nvmf/common.sh@295 -- # local -ga e810 00:19:37.817 20:14:34 -- nvmf/common.sh@296 -- # x722=() 00:19:37.817 20:14:34 -- nvmf/common.sh@296 -- # local -ga x722 00:19:37.817 20:14:34 -- nvmf/common.sh@297 -- # mlx=() 00:19:37.817 20:14:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:37.817 20:14:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.817 20:14:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.817 20:14:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.817 20:14:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.817 20:14:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.817 20:14:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.817 20:14:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.817 20:14:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.817 20:14:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.817 20:14:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.817 20:14:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.817 20:14:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:37.817 20:14:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:37.817 20:14:34 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:19:37.817 20:14:34 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:19:37.817 20:14:34 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:19:37.817 20:14:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:37.817 20:14:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:37.817 20:14:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:37.817 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:37.817 20:14:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:37.817 20:14:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:37.817 20:14:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.817 20:14:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.817 20:14:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:37.817 20:14:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:37.817 20:14:34 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:37.817 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:37.817 20:14:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:37.817 20:14:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:37.817 20:14:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.817 20:14:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.817 20:14:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:37.817 20:14:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:37.817 20:14:34 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:19:37.817 20:14:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:37.817 20:14:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.817 20:14:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:37.817 20:14:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.817 20:14:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:37.817 Found net devices under 0000:27:00.0: cvl_0_0 00:19:37.817 20:14:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.817 20:14:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:37.817 20:14:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.817 20:14:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:37.817 20:14:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.817 20:14:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:37.817 Found net devices under 0000:27:00.1: cvl_0_1 00:19:37.817 20:14:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.817 20:14:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:37.817 20:14:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:37.817 20:14:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:37.817 20:14:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:37.817 20:14:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:37.817 20:14:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.817 20:14:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.817 20:14:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.817 20:14:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:37.817 20:14:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:37.817 20:14:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:37.817 20:14:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:37.817 20:14:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:37.817 20:14:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.817 20:14:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:37.817 20:14:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:37.817 20:14:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:37.817 20:14:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:37.817 20:14:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:37.817 20:14:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:37.817 20:14:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:37.817 20:14:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:37.817 20:14:35 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:19:37.817 20:14:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:37.817 20:14:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:37.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:19:37.817 00:19:37.817 --- 10.0.0.2 ping statistics --- 00:19:37.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.817 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:19:37.817 20:14:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:37.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:37.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.378 ms 00:19:37.817 00:19:37.817 --- 10.0.0.1 ping statistics --- 00:19:37.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.817 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:19:37.817 20:14:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.817 20:14:35 -- nvmf/common.sh@410 -- # return 0 00:19:37.817 20:14:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:37.817 20:14:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.817 20:14:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:37.817 20:14:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:37.817 20:14:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.817 20:14:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:37.817 20:14:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:37.817 20:14:35 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:19:37.817 20:14:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:37.817 20:14:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:37.818 20:14:35 -- common/autotest_common.sh@10 -- # set +x 00:19:37.818 20:14:35 -- nvmf/common.sh@469 -- # nvmfpid=1522596 00:19:37.818 20:14:35 -- nvmf/common.sh@470 -- # waitforlisten 1522596 00:19:37.818 20:14:35 -- common/autotest_common.sh@819 -- # '[' -z 1522596 ']' 00:19:37.818 20:14:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.818 20:14:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:37.818 20:14:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.818 20:14:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:37.818 20:14:35 -- common/autotest_common.sh@10 -- # set +x 00:19:37.818 20:14:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:37.818 [2024-04-25 20:14:35.316179] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:19:37.818 [2024-04-25 20:14:35.316308] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.818 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.818 [2024-04-25 20:14:35.454608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:37.818 [2024-04-25 20:14:35.554563] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:37.818 [2024-04-25 20:14:35.554795] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.818 [2024-04-25 20:14:35.554811] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.818 [2024-04-25 20:14:35.554823] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:37.818 [2024-04-25 20:14:35.554986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.818 [2024-04-25 20:14:35.555091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.818 [2024-04-25 20:14:35.555102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:38.388 20:14:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:38.388 20:14:36 -- common/autotest_common.sh@852 -- # return 0 00:19:38.388 20:14:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:38.388 20:14:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:38.388 20:14:36 -- common/autotest_common.sh@10 -- # set +x 00:19:38.388 20:14:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.388 20:14:36 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:19:38.388 20:14:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:38.388 [2024-04-25 20:14:36.203806] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.388 20:14:36 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:38.649 20:14:36 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:38.649 [2024-04-25 20:14:36.529335] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.649 20:14:36 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:38.908 20:14:36 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:19:39.167 Malloc0 00:19:39.167 20:14:36 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:39.167 Delay0 00:19:39.167 20:14:37 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:39.426 20:14:37 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:19:39.426 NULL1 00:19:39.426 20:14:37 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:39.687 20:14:37 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=1523140 00:19:39.687 20:14:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:39.687 20:14:37 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:19:39.687 20:14:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:39.687 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.947 20:14:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:39.947 20:14:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:19:39.947 20:14:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:19:40.205 [2024-04-25 20:14:37.903160] bdev.c:4963:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:19:40.205 true 00:19:40.205 20:14:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:40.206 20:14:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:40.206 20:14:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:40.464 20:14:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:19:40.464 20:14:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:19:40.464 true 00:19:40.722 20:14:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:40.722 20:14:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:40.722 20:14:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:40.982 20:14:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:19:40.982 20:14:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:19:40.982 true 00:19:40.982 20:14:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:40.982 20:14:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:41.241 20:14:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:41.241 20:14:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:19:41.241 20:14:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:19:41.502 true 00:19:41.502 20:14:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:41.502 20:14:39 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:41.502 20:14:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:41.762 20:14:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:19:41.762 20:14:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:19:42.022 true 00:19:42.022 20:14:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:42.022 20:14:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:42.022 20:14:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:42.280 20:14:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:19:42.280 20:14:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:19:42.280 true 00:19:42.573 20:14:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:42.573 20:14:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:42.573 20:14:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:42.573 20:14:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:19:42.573 20:14:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:19:42.833 true 00:19:42.833 20:14:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:42.833 20:14:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:43.094 20:14:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:43.094 20:14:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:19:43.094 20:14:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:19:43.356 true 00:19:43.356 20:14:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:43.356 20:14:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:43.356 20:14:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:43.615 20:14:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:19:43.615 20:14:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:19:43.615 true 00:19:43.615 20:14:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:43.615 20:14:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:43.875 20:14:41 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:44.132 20:14:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:19:44.132 20:14:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:19:44.132 true 00:19:44.132 20:14:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:44.132 20:14:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:44.390 20:14:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:44.390 20:14:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:19:44.390 20:14:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:19:44.650 true 00:19:44.650 20:14:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:44.650 20:14:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:44.650 20:14:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:44.911 20:14:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:19:44.911 20:14:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:19:45.172 true 00:19:45.172 20:14:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:45.172 20:14:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:45.172 20:14:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:45.431 20:14:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:19:45.431 20:14:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:19:45.431 true 00:19:45.431 20:14:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:45.431 20:14:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:45.688 20:14:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:45.947 20:14:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:19:45.947 20:14:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:19:45.947 true 00:19:45.947 20:14:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:45.947 20:14:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:46.205 20:14:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:46.205 20:14:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 
00:19:46.205 20:14:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:19:46.465 true 00:19:46.465 20:14:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:46.465 20:14:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:46.465 20:14:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:46.726 20:14:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:19:46.726 20:14:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:19:46.726 true 00:19:46.726 20:14:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:46.726 20:14:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:46.984 20:14:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:47.243 20:14:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:19:47.243 20:14:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:19:47.243 true 00:19:47.243 20:14:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:47.243 20:14:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:47.502 20:14:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:47.502 20:14:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:19:47.502 20:14:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:19:47.760 true 00:19:47.760 20:14:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:47.760 20:14:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:47.760 20:14:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:48.018 20:14:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:19:48.018 20:14:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:19:48.018 true 00:19:48.276 20:14:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:48.276 20:14:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:48.276 20:14:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:48.534 20:14:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:19:48.534 20:14:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:19:48.534 true 00:19:48.534 20:14:46 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:48.534 20:14:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:48.791 20:14:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:48.791 20:14:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:19:48.791 20:14:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:19:49.049 true 00:19:49.049 20:14:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:49.049 20:14:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:49.049 20:14:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:49.306 20:14:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:19:49.307 20:14:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:19:49.307 true 00:19:49.307 20:14:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:49.307 20:14:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:49.565 20:14:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:49.565 20:14:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:19:49.565 20:14:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:19:49.823 true 00:19:49.823 20:14:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:49.823 20:14:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:50.081 20:14:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:50.081 20:14:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:19:50.081 20:14:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:19:50.339 true 00:19:50.339 20:14:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:50.340 20:14:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:50.340 20:14:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:50.597 20:14:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:19:50.597 20:14:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:19:50.597 true 00:19:50.597 20:14:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:50.597 20:14:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:50.856 20:14:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:50.856 20:14:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:19:50.856 20:14:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:19:51.114 true 00:19:51.114 20:14:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:51.114 20:14:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:51.114 20:14:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:51.372 20:14:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:19:51.372 20:14:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:19:51.372 true 00:19:51.372 20:14:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:51.372 20:14:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:51.630 20:14:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:51.889 20:14:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:19:51.889 20:14:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:19:51.889 true 00:19:51.889 20:14:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:51.889 20:14:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:52.147 20:14:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:52.147 20:14:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:19:52.147 20:14:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:19:52.407 true 00:19:52.407 20:14:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:52.407 20:14:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:52.407 20:14:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:52.665 20:14:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:19:52.665 20:14:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:19:52.665 true 00:19:52.665 20:14:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:52.665 20:14:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:52.922 20:14:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:53.180 20:14:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:19:53.180 20:14:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:19:53.180 true 00:19:53.180 20:14:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:53.180 20:14:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:53.438 20:14:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:53.438 20:14:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:19:53.438 20:14:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:19:53.696 true 00:19:53.696 20:14:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:53.696 20:14:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:53.696 20:14:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:53.954 20:14:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:19:53.954 20:14:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:19:53.954 true 00:19:53.954 20:14:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:53.954 20:14:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:54.213 20:14:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:54.213 20:14:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:19:54.213 20:14:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:19:54.471 true 00:19:54.471 20:14:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:54.471 20:14:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:54.729 20:14:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:54.729 20:14:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:19:54.729 20:14:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:19:54.988 true 00:19:54.988 20:14:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:54.988 20:14:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:54.988 20:14:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:55.246 20:14:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:19:55.246 20:14:52 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:19:55.246 true 00:19:55.247 20:14:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:55.247 20:14:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:55.504 20:14:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:55.504 20:14:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:19:55.504 20:14:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:19:55.762 true 00:19:55.762 20:14:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:55.762 20:14:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:55.762 20:14:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:56.019 20:14:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:19:56.019 20:14:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:19:56.019 true 00:19:56.019 20:14:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:56.019 20:14:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:56.276 20:14:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:56.276 20:14:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:19:56.276 20:14:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:19:56.534 true 00:19:56.534 20:14:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:56.534 20:14:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:56.792 20:14:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:56.792 20:14:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:19:56.792 20:14:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:19:56.792 true 00:19:57.049 20:14:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:57.049 20:14:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:57.049 20:14:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:57.307 20:14:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 00:19:57.307 20:14:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:19:57.307 true 00:19:57.307 20:14:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 
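In plain shell, the iteration that repeats throughout this stretch of the trace (ns_hotplug_stress.sh lines @35-@41) looks roughly like the sketch below. This is a reconstruction from the xtrace output, not the script verbatim: the loop header, the PERF_PID variable name, and the starting size are illustrative, since the actual loop body and initial null_size are not visible in this excerpt.

    # Hedged sketch of the hot-plug stress iteration, reconstructed from the trace above.
    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    null_size=$START_SIZE                              # starting value not shown here; the trace is already at 1015+
    while kill -0 "$PERF_PID"; do                      # @35: keep going while the I/O generator is alive
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # @36: hot-remove namespace 1
        $rpc nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0   # @37: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                   # @40: grow the target size each pass
        $rpc bdev_null_resize NULL1 "$null_size"       # @41: resize NULL1 (1015, 1016, ... in this run)
    done

Once the stressor process exits, the @35 check fails ("No such process" further down), the loop ends, and the test moves on to teardown.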
00:19:57.307 20:14:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:57.568 20:14:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:57.568 20:14:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:19:57.568 20:14:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:19:57.828 true 00:19:57.828 20:14:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:57.828 20:14:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:57.828 20:14:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:58.087 20:14:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:19:58.087 20:14:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:19:58.087 true 00:19:58.087 20:14:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:58.087 20:14:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:58.382 20:14:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:58.382 20:14:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:19:58.382 20:14:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:19:58.640 true 00:19:58.640 20:14:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:58.640 20:14:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:58.640 20:14:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:58.898 20:14:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:19:58.898 20:14:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:19:59.157 true 00:19:59.157 20:14:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:59.157 20:14:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:59.157 20:14:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:59.415 20:14:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:19:59.415 20:14:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:19:59.415 true 00:19:59.415 20:14:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:59.415 20:14:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:59.674 
20:14:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:59.674 20:14:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1047 00:19:59.674 20:14:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:19:59.933 true 00:19:59.933 20:14:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:19:59.933 20:14:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:59.933 20:14:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:00.191 20:14:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1048 00:20:00.191 20:14:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:20:00.191 true 00:20:00.191 20:14:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:00.191 20:14:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:00.449 20:14:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:00.708 20:14:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1049 00:20:00.708 20:14:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:20:00.708 true 00:20:00.708 20:14:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:00.708 20:14:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:00.967 20:14:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:00.967 20:14:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1050 00:20:00.967 20:14:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:20:01.225 true 00:20:01.225 20:14:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:01.225 20:14:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:01.225 20:14:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:01.483 20:14:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1051 00:20:01.483 20:14:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:20:01.483 true 00:20:01.483 20:14:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:01.483 20:14:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:01.742 20:14:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:01.742 20:14:59 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1052 00:20:01.742 20:14:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:20:02.000 true 00:20:02.000 20:14:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:02.000 20:14:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:02.258 20:14:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:02.258 20:15:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1053 00:20:02.258 20:15:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:20:02.517 true 00:20:02.517 20:15:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:02.517 20:15:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:02.517 20:15:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:02.777 20:15:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1054 00:20:02.777 20:15:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:20:02.777 true 00:20:02.777 20:15:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:02.777 20:15:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:03.036 20:15:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:03.294 20:15:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1055 00:20:03.294 20:15:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:20:03.294 true 00:20:03.294 20:15:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:03.294 20:15:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:03.551 20:15:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:03.551 20:15:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1056 00:20:03.551 20:15:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:20:03.809 true 00:20:03.809 20:15:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:03.809 20:15:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:03.810 20:15:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:04.068 20:15:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1057 00:20:04.068 20:15:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1057 00:20:04.068 true 00:20:04.068 20:15:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:04.068 20:15:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:04.325 20:15:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:04.325 20:15:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1058 00:20:04.325 20:15:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:20:04.584 true 00:20:04.584 20:15:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:04.584 20:15:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:04.845 20:15:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:04.845 20:15:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1059 00:20:04.845 20:15:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:20:05.104 true 00:20:05.104 20:15:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:05.104 20:15:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:05.104 20:15:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:05.370 20:15:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1060 00:20:05.370 20:15:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:20:05.370 true 00:20:05.370 20:15:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:05.370 20:15:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:05.629 20:15:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:05.629 20:15:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1061 00:20:05.629 20:15:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:20:05.888 true 00:20:05.888 20:15:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:05.888 20:15:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:06.145 20:15:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:06.145 20:15:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1062 00:20:06.145 20:15:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1062 00:20:06.402 true 00:20:06.402 20:15:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:06.402 20:15:04 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:06.402 20:15:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:06.660 20:15:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1063 00:20:06.660 20:15:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1063 00:20:06.660 true 00:20:06.660 20:15:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:06.660 20:15:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:06.918 20:15:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:06.918 20:15:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1064 00:20:06.918 20:15:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1064 00:20:07.177 true 00:20:07.177 20:15:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:07.177 20:15:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:07.177 20:15:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:07.434 20:15:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1065 00:20:07.434 20:15:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1065 00:20:07.692 true 00:20:07.692 20:15:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:07.692 20:15:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:07.692 20:15:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:07.950 20:15:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1066 00:20:07.950 20:15:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1066 00:20:07.950 true 00:20:07.950 20:15:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:07.950 20:15:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:08.210 20:15:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:08.469 20:15:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1067 00:20:08.469 20:15:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1067 00:20:08.469 true 00:20:08.469 20:15:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:08.469 20:15:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:08.728 20:15:06 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:08.728 20:15:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1068 00:20:08.728 20:15:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1068 00:20:08.986 true 00:20:08.986 20:15:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:08.986 20:15:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:08.986 20:15:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:09.244 20:15:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1069 00:20:09.244 20:15:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1069 00:20:09.244 true 00:20:09.244 20:15:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:09.244 20:15:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:09.504 20:15:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:09.504 20:15:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1070 00:20:09.504 20:15:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1070 00:20:09.764 true 00:20:09.764 20:15:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:09.764 20:15:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:09.764 20:15:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:10.024 Initializing NVMe Controllers 00:20:10.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.024 Controller IO queue size 128, less than required. 00:20:10.024 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:10.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:10.024 Initialization complete. Launching workers. 
00:20:10.024 ======================================================== 00:20:10.024 Latency(us) 00:20:10.024 Device Information : IOPS MiB/s Average min max 00:20:10.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30584.92 14.93 4185.11 1373.09 44109.39 00:20:10.024 ======================================================== 00:20:10.024 Total : 30584.92 14.93 4185.11 1373.09 44109.39 00:20:10.024 00:20:10.024 20:15:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1071 00:20:10.024 20:15:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1071 00:20:10.024 true 00:20:10.024 20:15:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1523140 00:20:10.024 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (1523140) - No such process 00:20:10.024 20:15:07 -- target/ns_hotplug_stress.sh@44 -- # wait 1523140 00:20:10.024 20:15:07 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:10.024 20:15:07 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:20:10.024 20:15:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:10.025 20:15:07 -- nvmf/common.sh@116 -- # sync 00:20:10.284 20:15:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:10.284 20:15:07 -- nvmf/common.sh@119 -- # set +e 00:20:10.284 20:15:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:10.284 20:15:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:10.284 rmmod nvme_tcp 00:20:10.284 rmmod nvme_fabrics 00:20:10.284 rmmod nvme_keyring 00:20:10.284 20:15:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:10.284 20:15:08 -- nvmf/common.sh@123 -- # set -e 00:20:10.284 20:15:08 -- nvmf/common.sh@124 -- # return 0 00:20:10.284 20:15:08 -- nvmf/common.sh@477 -- # '[' -n 1522596 ']' 00:20:10.284 20:15:08 -- nvmf/common.sh@478 -- # killprocess 1522596 00:20:10.284 20:15:08 -- common/autotest_common.sh@926 -- # '[' -z 1522596 ']' 00:20:10.284 20:15:08 -- common/autotest_common.sh@930 -- # kill -0 1522596 00:20:10.284 20:15:08 -- common/autotest_common.sh@931 -- # uname 00:20:10.284 20:15:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:10.284 20:15:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1522596 00:20:10.284 20:15:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:10.284 20:15:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:10.284 20:15:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1522596' 00:20:10.284 killing process with pid 1522596 00:20:10.284 20:15:08 -- common/autotest_common.sh@945 -- # kill 1522596 00:20:10.284 20:15:08 -- common/autotest_common.sh@950 -- # wait 1522596 00:20:10.853 20:15:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:10.853 20:15:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:10.853 20:15:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:10.853 20:15:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:10.853 20:15:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:10.853 20:15:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.853 20:15:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.853 20:15:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.760 20:15:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:12.760 00:20:12.760 real 0m41.739s 00:20:12.760 user 2m34.375s 00:20:12.760 
sys 0m11.821s 00:20:12.760 20:15:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.760 20:15:10 -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 ************************************ 00:20:12.760 END TEST nvmf_ns_hotplug_stress 00:20:12.760 ************************************ 00:20:12.760 20:15:10 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:20:12.760 20:15:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:12.760 20:15:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:12.760 20:15:10 -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 ************************************ 00:20:12.760 START TEST nvmf_connect_stress 00:20:12.760 ************************************ 00:20:12.760 20:15:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:20:12.760 * Looking for test storage... 00:20:13.020 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:20:13.020 20:15:10 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:13.020 20:15:10 -- nvmf/common.sh@7 -- # uname -s 00:20:13.020 20:15:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.020 20:15:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.020 20:15:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.020 20:15:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.020 20:15:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.020 20:15:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.020 20:15:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.020 20:15:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.020 20:15:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.020 20:15:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.021 20:15:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:13.021 20:15:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:13.021 20:15:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.021 20:15:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.021 20:15:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:13.021 20:15:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:13.021 20:15:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.021 20:15:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.021 20:15:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.021 20:15:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.021 20:15:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.021 20:15:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.021 20:15:10 -- paths/export.sh@5 -- # export PATH 00:20:13.021 20:15:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.021 20:15:10 -- nvmf/common.sh@46 -- # : 0 00:20:13.021 20:15:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:13.021 20:15:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:13.021 20:15:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:13.021 20:15:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.021 20:15:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.021 20:15:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:13.021 20:15:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:13.021 20:15:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:13.021 20:15:10 -- target/connect_stress.sh@12 -- # nvmftestinit 00:20:13.021 20:15:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:13.021 20:15:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.021 20:15:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:13.021 20:15:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:13.021 20:15:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:13.021 20:15:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.021 20:15:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.021 20:15:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.021 20:15:10 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:20:13.021 20:15:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:13.021 20:15:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:13.021 20:15:10 -- common/autotest_common.sh@10 -- # set +x 00:20:18.338 20:15:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:18.338 20:15:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:18.338 20:15:15 -- nvmf/common.sh@290 -- # local -a pci_devs 
00:20:18.338 20:15:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:18.338 20:15:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:18.338 20:15:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:18.338 20:15:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:18.338 20:15:15 -- nvmf/common.sh@294 -- # net_devs=() 00:20:18.338 20:15:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:18.338 20:15:15 -- nvmf/common.sh@295 -- # e810=() 00:20:18.338 20:15:15 -- nvmf/common.sh@295 -- # local -ga e810 00:20:18.338 20:15:15 -- nvmf/common.sh@296 -- # x722=() 00:20:18.338 20:15:15 -- nvmf/common.sh@296 -- # local -ga x722 00:20:18.338 20:15:15 -- nvmf/common.sh@297 -- # mlx=() 00:20:18.338 20:15:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:18.338 20:15:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.338 20:15:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.338 20:15:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.338 20:15:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.338 20:15:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.338 20:15:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.338 20:15:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.338 20:15:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.338 20:15:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.338 20:15:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.338 20:15:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.338 20:15:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:18.338 20:15:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:18.338 20:15:15 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:20:18.338 20:15:15 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:20:18.338 20:15:15 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:20:18.338 20:15:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:18.338 20:15:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:18.338 20:15:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:18.338 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:18.338 20:15:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:18.338 20:15:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:18.338 20:15:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.338 20:15:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.338 20:15:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:18.338 20:15:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:18.338 20:15:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:18.338 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:18.338 20:15:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:18.338 20:15:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:18.338 20:15:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.338 20:15:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.338 20:15:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:18.338 20:15:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:18.338 20:15:15 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:20:18.339 20:15:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:20:18.339 20:15:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.339 20:15:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:18.339 20:15:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.339 20:15:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:18.339 Found net devices under 0000:27:00.0: cvl_0_0 00:20:18.339 20:15:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.339 20:15:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:18.339 20:15:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.339 20:15:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:18.339 20:15:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.339 20:15:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:18.339 Found net devices under 0000:27:00.1: cvl_0_1 00:20:18.339 20:15:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.339 20:15:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:18.339 20:15:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:18.339 20:15:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:18.339 20:15:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:18.339 20:15:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:18.339 20:15:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.339 20:15:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.339 20:15:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.339 20:15:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:18.339 20:15:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.339 20:15:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.339 20:15:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:18.339 20:15:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.339 20:15:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.339 20:15:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:18.339 20:15:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:18.339 20:15:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.339 20:15:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.339 20:15:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.339 20:15:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.339 20:15:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:18.339 20:15:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.339 20:15:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.339 20:15:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.339 20:15:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:18.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:18.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:20:18.339 00:20:18.339 --- 10.0.0.2 ping statistics --- 00:20:18.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.339 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:20:18.339 20:15:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:18.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:20:18.339 00:20:18.339 --- 10.0.0.1 ping statistics --- 00:20:18.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.339 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:20:18.339 20:15:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.339 20:15:15 -- nvmf/common.sh@410 -- # return 0 00:20:18.339 20:15:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:18.339 20:15:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.339 20:15:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:18.339 20:15:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:18.339 20:15:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.339 20:15:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:18.339 20:15:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:18.339 20:15:15 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:20:18.339 20:15:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:18.339 20:15:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:18.339 20:15:15 -- common/autotest_common.sh@10 -- # set +x 00:20:18.339 20:15:15 -- nvmf/common.sh@469 -- # nvmfpid=1533854 00:20:18.339 20:15:15 -- nvmf/common.sh@470 -- # waitforlisten 1533854 00:20:18.339 20:15:15 -- common/autotest_common.sh@819 -- # '[' -z 1533854 ']' 00:20:18.339 20:15:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.339 20:15:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:18.339 20:15:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.339 20:15:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:18.339 20:15:15 -- common/autotest_common.sh@10 -- # set +x 00:20:18.339 20:15:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:18.339 [2024-04-25 20:15:16.039655] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:18.339 [2024-04-25 20:15:16.039760] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.339 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.339 [2024-04-25 20:15:16.159751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:18.339 [2024-04-25 20:15:16.257711] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:18.339 [2024-04-25 20:15:16.257886] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.339 [2024-04-25 20:15:16.257901] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
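Condensed, the nvmf_tcp_init sequence traced just above amounts to the following steps; every command here is taken from the trace (interface and namespace names cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk are as logged), so this is only a restatement for readability, not an excerpt of nvmf/common.sh.

    # Condensed from the nvmf_tcp_init trace above.
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                         # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP to the listener
    ping -c 1 10.0.0.2                                   # sanity check: initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and target -> initiator

With both pings succeeding, nvme-tcp is loaded and nvmf_tgt is started inside the namespace, which is what the NOTICE lines that follow report.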
00:20:18.339 [2024-04-25 20:15:16.257911] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:18.339 [2024-04-25 20:15:16.258052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.339 [2024-04-25 20:15:16.258152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.339 [2024-04-25 20:15:16.258162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:18.911 20:15:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:18.911 20:15:16 -- common/autotest_common.sh@852 -- # return 0 00:20:18.911 20:15:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:18.911 20:15:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:18.911 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:20:18.911 20:15:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.911 20:15:16 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:18.911 20:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:18.911 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:20:18.911 [2024-04-25 20:15:16.793504] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.911 20:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:18.911 20:15:16 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:18.911 20:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:18.911 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:20:18.911 20:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:18.911 20:15:16 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:18.911 20:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:18.911 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:20:18.911 [2024-04-25 20:15:16.830227] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.911 20:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:18.911 20:15:16 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:18.911 20:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:18.911 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:20:18.911 NULL1 00:20:18.911 20:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:19.173 20:15:16 -- target/connect_stress.sh@21 -- # PERF_PID=1534081 00:20:19.173 20:15:16 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:19.173 20:15:16 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:19.173 20:15:16 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # seq 1 20 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:19.173 20:15:16 -- target/connect_stress.sh@28 -- # cat 00:20:19.173 20:15:16 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:19.173 20:15:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:19.173 20:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.173 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:20:19.431 20:15:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:19.431 20:15:17 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:19.431 20:15:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:19.431 20:15:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.431 20:15:17 -- common/autotest_common.sh@10 -- # set +x 00:20:19.690 20:15:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:19.690 20:15:17 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:19.690 20:15:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:19.690 20:15:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.690 20:15:17 -- common/autotest_common.sh@10 -- # set +x 00:20:20.258 
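Stripped of the xtrace prefixes, the connect_stress setup traced above reduces to the sketch below. The four rpc_cmd calls, the connect_stress invocation, and the rpc.txt path are taken directly from the trace; the backgrounding, the PERF_PID assignment from $!, and the shape of the monitoring loop are assumptions about the script body, which is not shown in this log.

    # Hedged sketch of connect_stress.sh as reconstructed from the trace above.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512

    # Start the connect/disconnect stressor against the TCP listener for 10 seconds.
    /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!                          # assumption: the logged PID 1534081 comes from $!

    rpcs=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt
    # The @27/@28 seq/cat lines above batch 20 RPCs into rpc.txt; while the stressor
    # is alive (@34 kill -0), that batch is replayed against the target (@35 rpc_cmd).
    while kill -0 "$PERF_PID"; do
        rpc_cmd < "$rpcs"                # assumption about how the batch file is consumed
    done

The repeated "[[ 0 == 0 ]] / kill -0 1534081 / rpc_cmd" triplets that continue below are the successive passes of that monitoring loop.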
20:15:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.258 20:15:17 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:20.258 20:15:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:20.258 20:15:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.258 20:15:17 -- common/autotest_common.sh@10 -- # set +x 00:20:20.518 20:15:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.518 20:15:18 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:20.518 20:15:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:20.518 20:15:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.518 20:15:18 -- common/autotest_common.sh@10 -- # set +x 00:20:20.778 20:15:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.778 20:15:18 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:20.778 20:15:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:20.778 20:15:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.778 20:15:18 -- common/autotest_common.sh@10 -- # set +x 00:20:21.038 20:15:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:21.038 20:15:18 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:21.038 20:15:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:21.038 20:15:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:21.038 20:15:18 -- common/autotest_common.sh@10 -- # set +x 00:20:21.296 20:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:21.296 20:15:19 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:21.296 20:15:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:21.296 20:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:21.296 20:15:19 -- common/autotest_common.sh@10 -- # set +x 00:20:21.862 20:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:21.862 20:15:19 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:21.862 20:15:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:21.862 20:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:21.862 20:15:19 -- common/autotest_common.sh@10 -- # set +x 00:20:22.122 20:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:22.122 20:15:19 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:22.122 20:15:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:22.122 20:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:22.122 20:15:19 -- common/autotest_common.sh@10 -- # set +x 00:20:22.383 20:15:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:22.383 20:15:20 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:22.383 20:15:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:22.383 20:15:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:22.383 20:15:20 -- common/autotest_common.sh@10 -- # set +x 00:20:22.644 20:15:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:22.644 20:15:20 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:22.644 20:15:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:22.644 20:15:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:22.644 20:15:20 -- common/autotest_common.sh@10 -- # set +x 00:20:22.903 20:15:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:22.903 20:15:20 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:22.903 20:15:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:22.903 20:15:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:22.903 20:15:20 -- common/autotest_common.sh@10 -- # set +x 00:20:23.470 20:15:21 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.470 20:15:21 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:23.470 20:15:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:23.470 20:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.470 20:15:21 -- common/autotest_common.sh@10 -- # set +x 00:20:23.729 20:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.729 20:15:21 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:23.729 20:15:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:23.729 20:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.729 20:15:21 -- common/autotest_common.sh@10 -- # set +x 00:20:23.989 20:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.989 20:15:21 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:23.989 20:15:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:23.989 20:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.989 20:15:21 -- common/autotest_common.sh@10 -- # set +x 00:20:24.248 20:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:24.248 20:15:22 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:24.249 20:15:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:24.249 20:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.249 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:20:24.509 20:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:24.509 20:15:22 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:24.509 20:15:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:24.509 20:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.509 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:20:25.078 20:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.078 20:15:22 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:25.079 20:15:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:25.079 20:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.079 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:20:25.337 20:15:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.337 20:15:23 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:25.337 20:15:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:25.337 20:15:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.337 20:15:23 -- common/autotest_common.sh@10 -- # set +x 00:20:25.597 20:15:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.597 20:15:23 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:25.597 20:15:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:25.597 20:15:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.597 20:15:23 -- common/autotest_common.sh@10 -- # set +x 00:20:25.857 20:15:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.857 20:15:23 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:25.857 20:15:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:25.857 20:15:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.857 20:15:23 -- common/autotest_common.sh@10 -- # set +x 00:20:26.117 20:15:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:26.117 20:15:23 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:26.117 20:15:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:26.117 20:15:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:26.117 20:15:23 -- common/autotest_common.sh@10 -- # set +x 00:20:26.686 20:15:24 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:26.686 20:15:24 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:26.686 20:15:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:26.686 20:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:26.686 20:15:24 -- common/autotest_common.sh@10 -- # set +x 00:20:26.943 20:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:26.943 20:15:24 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:26.943 20:15:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:26.943 20:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:26.943 20:15:24 -- common/autotest_common.sh@10 -- # set +x 00:20:27.201 20:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.201 20:15:24 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:27.201 20:15:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:27.201 20:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.201 20:15:24 -- common/autotest_common.sh@10 -- # set +x 00:20:27.459 20:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.459 20:15:25 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:27.459 20:15:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:27.459 20:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.459 20:15:25 -- common/autotest_common.sh@10 -- # set +x 00:20:27.718 20:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.718 20:15:25 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:27.718 20:15:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:27.718 20:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.718 20:15:25 -- common/autotest_common.sh@10 -- # set +x 00:20:28.289 20:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:28.289 20:15:25 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:28.289 20:15:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:28.289 20:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:28.289 20:15:25 -- common/autotest_common.sh@10 -- # set +x 00:20:28.548 20:15:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:28.548 20:15:26 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:28.548 20:15:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:28.548 20:15:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:28.548 20:15:26 -- common/autotest_common.sh@10 -- # set +x 00:20:28.806 20:15:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:28.806 20:15:26 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:28.806 20:15:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:28.806 20:15:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:28.806 20:15:26 -- common/autotest_common.sh@10 -- # set +x 00:20:29.063 20:15:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:29.063 20:15:26 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:29.063 20:15:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:29.063 20:15:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:29.063 20:15:26 -- common/autotest_common.sh@10 -- # set +x 00:20:29.063 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:29.322 20:15:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:29.322 20:15:27 -- target/connect_stress.sh@34 -- # kill -0 1534081 00:20:29.322 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1534081) - No such process 00:20:29.322 
20:15:27 -- target/connect_stress.sh@38 -- # wait 1534081 00:20:29.322 20:15:27 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:29.322 20:15:27 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:29.322 20:15:27 -- target/connect_stress.sh@43 -- # nvmftestfini 00:20:29.322 20:15:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:29.322 20:15:27 -- nvmf/common.sh@116 -- # sync 00:20:29.322 20:15:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:29.322 20:15:27 -- nvmf/common.sh@119 -- # set +e 00:20:29.322 20:15:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:29.322 20:15:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:29.322 rmmod nvme_tcp 00:20:29.322 rmmod nvme_fabrics 00:20:29.322 rmmod nvme_keyring 00:20:29.582 20:15:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:29.582 20:15:27 -- nvmf/common.sh@123 -- # set -e 00:20:29.582 20:15:27 -- nvmf/common.sh@124 -- # return 0 00:20:29.582 20:15:27 -- nvmf/common.sh@477 -- # '[' -n 1533854 ']' 00:20:29.582 20:15:27 -- nvmf/common.sh@478 -- # killprocess 1533854 00:20:29.582 20:15:27 -- common/autotest_common.sh@926 -- # '[' -z 1533854 ']' 00:20:29.583 20:15:27 -- common/autotest_common.sh@930 -- # kill -0 1533854 00:20:29.583 20:15:27 -- common/autotest_common.sh@931 -- # uname 00:20:29.583 20:15:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:29.583 20:15:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1533854 00:20:29.583 20:15:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:29.583 20:15:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:29.583 20:15:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1533854' 00:20:29.583 killing process with pid 1533854 00:20:29.583 20:15:27 -- common/autotest_common.sh@945 -- # kill 1533854 00:20:29.583 20:15:27 -- common/autotest_common.sh@950 -- # wait 1533854 00:20:29.843 20:15:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:29.843 20:15:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:29.843 20:15:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:29.843 20:15:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:29.843 20:15:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:29.843 20:15:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.843 20:15:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.843 20:15:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.385 20:15:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:32.385 00:20:32.385 real 0m19.194s 00:20:32.385 user 0m43.885s 00:20:32.385 sys 0m5.557s 00:20:32.385 20:15:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:32.385 20:15:29 -- common/autotest_common.sh@10 -- # set +x 00:20:32.385 ************************************ 00:20:32.385 END TEST nvmf_connect_stress 00:20:32.385 ************************************ 00:20:32.385 20:15:29 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:20:32.385 20:15:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:32.385 20:15:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:32.385 20:15:29 -- common/autotest_common.sh@10 -- # set +x 00:20:32.385 ************************************ 00:20:32.385 START TEST 
nvmf_fused_ordering 00:20:32.385 ************************************ 00:20:32.385 20:15:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:20:32.385 * Looking for test storage... 00:20:32.385 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:20:32.385 20:15:29 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:32.385 20:15:29 -- nvmf/common.sh@7 -- # uname -s 00:20:32.385 20:15:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.385 20:15:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.385 20:15:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.385 20:15:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.385 20:15:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.385 20:15:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.385 20:15:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.385 20:15:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.385 20:15:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.385 20:15:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.385 20:15:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:32.385 20:15:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:32.385 20:15:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.385 20:15:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.385 20:15:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:32.385 20:15:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:32.385 20:15:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.385 20:15:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.385 20:15:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.385 20:15:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.385 20:15:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.385 20:15:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.385 20:15:29 -- paths/export.sh@5 -- # export PATH 00:20:32.385 20:15:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.385 20:15:29 -- nvmf/common.sh@46 -- # : 0 00:20:32.385 20:15:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:32.385 20:15:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:32.385 20:15:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:32.385 20:15:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.385 20:15:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.385 20:15:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:32.385 20:15:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:32.385 20:15:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:32.385 20:15:29 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:20:32.385 20:15:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:32.385 20:15:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.385 20:15:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:32.385 20:15:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:32.385 20:15:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:32.385 20:15:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.385 20:15:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.385 20:15:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.385 20:15:29 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:20:32.385 20:15:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:32.385 20:15:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:32.385 20:15:29 -- common/autotest_common.sh@10 -- # set +x 00:20:37.667 20:15:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:37.667 20:15:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:37.667 20:15:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:37.667 20:15:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:37.667 20:15:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:37.667 20:15:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:37.667 20:15:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:37.667 20:15:35 -- nvmf/common.sh@294 -- # net_devs=() 00:20:37.667 20:15:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:37.667 20:15:35 -- nvmf/common.sh@295 -- # e810=() 00:20:37.667 20:15:35 -- nvmf/common.sh@295 -- # local -ga e810 00:20:37.667 20:15:35 -- nvmf/common.sh@296 -- # 
x722=() 00:20:37.667 20:15:35 -- nvmf/common.sh@296 -- # local -ga x722 00:20:37.667 20:15:35 -- nvmf/common.sh@297 -- # mlx=() 00:20:37.667 20:15:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:37.667 20:15:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.667 20:15:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.667 20:15:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.667 20:15:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.667 20:15:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.667 20:15:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.667 20:15:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.667 20:15:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.667 20:15:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.667 20:15:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.667 20:15:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.667 20:15:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:37.667 20:15:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:37.667 20:15:35 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:20:37.667 20:15:35 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:20:37.668 20:15:35 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:20:37.668 20:15:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:37.668 20:15:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:37.668 20:15:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:37.668 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:37.668 20:15:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:37.668 20:15:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:37.668 20:15:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.668 20:15:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.668 20:15:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:37.668 20:15:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:37.668 20:15:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:37.668 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:37.668 20:15:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:37.668 20:15:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:37.668 20:15:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.668 20:15:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.668 20:15:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:37.668 20:15:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:37.668 20:15:35 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:20:37.668 20:15:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:37.668 20:15:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.668 20:15:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:37.668 20:15:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.668 20:15:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:37.668 Found net devices under 0000:27:00.0: cvl_0_0 00:20:37.668 20:15:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.668 20:15:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:20:37.668 20:15:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.668 20:15:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:37.668 20:15:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.668 20:15:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:37.668 Found net devices under 0000:27:00.1: cvl_0_1 00:20:37.668 20:15:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.668 20:15:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:37.668 20:15:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:37.668 20:15:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:37.668 20:15:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:37.668 20:15:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:37.668 20:15:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.668 20:15:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.668 20:15:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.668 20:15:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:37.668 20:15:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.668 20:15:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.668 20:15:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:37.668 20:15:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.668 20:15:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.668 20:15:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:37.668 20:15:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:37.668 20:15:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.668 20:15:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.668 20:15:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.668 20:15:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.668 20:15:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:37.668 20:15:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.668 20:15:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:37.668 20:15:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.668 20:15:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:37.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:20:37.668 00:20:37.668 --- 10.0.0.2 ping statistics --- 00:20:37.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.668 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:20:37.668 20:15:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:37.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:20:37.668 00:20:37.668 --- 10.0.0.1 ping statistics --- 00:20:37.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.668 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:20:37.668 20:15:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.668 20:15:35 -- nvmf/common.sh@410 -- # return 0 00:20:37.668 20:15:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:37.668 20:15:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.668 20:15:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:37.668 20:15:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:37.668 20:15:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.668 20:15:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:37.668 20:15:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:37.668 20:15:35 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:20:37.668 20:15:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:37.668 20:15:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:37.668 20:15:35 -- common/autotest_common.sh@10 -- # set +x 00:20:37.668 20:15:35 -- nvmf/common.sh@469 -- # nvmfpid=1540080 00:20:37.668 20:15:35 -- nvmf/common.sh@470 -- # waitforlisten 1540080 00:20:37.668 20:15:35 -- common/autotest_common.sh@819 -- # '[' -z 1540080 ']' 00:20:37.668 20:15:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.668 20:15:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:37.668 20:15:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.668 20:15:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:37.668 20:15:35 -- common/autotest_common.sh@10 -- # set +x 00:20:37.668 20:15:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:37.928 [2024-04-25 20:15:35.614486] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:37.928 [2024-04-25 20:15:35.614597] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.928 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.928 [2024-04-25 20:15:35.738289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.928 [2024-04-25 20:15:35.834462] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:37.928 [2024-04-25 20:15:35.834641] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.928 [2024-04-25 20:15:35.834654] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.928 [2024-04-25 20:15:35.834664] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
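The trace above comes from nvmf_tcp_init in nvmf/common.sh: it enumerates the two ice ports (cvl_0_0, cvl_0_1), moves the target-side port into a private network namespace, assigns 10.0.0.2 to the target and 10.0.0.1 to the initiator, opens TCP port 4420, and verifies connectivity with a ping in each direction before loading nvme-tcp and launching nvmf_tgt (-i 0 -e 0xFFFF -m 0x2) inside that namespace. A minimal stand-alone sketch of the same bring-up, using the interface names and addresses exactly as they appear in this log (run as root; the NIC names are specific to this host), would be:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP on the test port
    ping -c 1 10.0.0.2                                               # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator

The ping output and statistics that follow in the log are the success case of those last two checks.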
00:20:37.928 [2024-04-25 20:15:35.834689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.499 20:15:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:38.499 20:15:36 -- common/autotest_common.sh@852 -- # return 0 00:20:38.499 20:15:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:38.499 20:15:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:38.499 20:15:36 -- common/autotest_common.sh@10 -- # set +x 00:20:38.499 20:15:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.499 20:15:36 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:38.499 20:15:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:38.499 20:15:36 -- common/autotest_common.sh@10 -- # set +x 00:20:38.499 [2024-04-25 20:15:36.338918] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.499 20:15:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:38.499 20:15:36 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:38.499 20:15:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:38.499 20:15:36 -- common/autotest_common.sh@10 -- # set +x 00:20:38.499 20:15:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:38.499 20:15:36 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:38.499 20:15:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:38.499 20:15:36 -- common/autotest_common.sh@10 -- # set +x 00:20:38.499 [2024-04-25 20:15:36.355078] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.499 20:15:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:38.499 20:15:36 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:38.499 20:15:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:38.499 20:15:36 -- common/autotest_common.sh@10 -- # set +x 00:20:38.499 NULL1 00:20:38.499 20:15:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:38.499 20:15:36 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:20:38.499 20:15:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:38.499 20:15:36 -- common/autotest_common.sh@10 -- # set +x 00:20:38.499 20:15:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:38.499 20:15:36 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:20:38.499 20:15:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:38.499 20:15:36 -- common/autotest_common.sh@10 -- # set +x 00:20:38.499 20:15:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:38.499 20:15:36 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:38.499 [2024-04-25 20:15:36.418866] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
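With the target up, fused_ordering.sh configures it over the RPC socket in the same way connect_stress.sh did earlier: a TCP transport with the options shown (-o -u 8192), subsystem nqn.2016-06.io.spdk:cnode1 (-a -s SPDK00000000000001 -m 10), a listener on 10.0.0.2:4420, and a 1000 MB null bdev with 512-byte blocks attached as a namespace, after which the fused_ordering binary is pointed at that subsystem. rpc_cmd is the autotest wrapper; assuming it forwards to the standard scripts/rpc.py on the /var/tmp/spdk.sock socket shown above, a roughly equivalent stand-alone sequence is:

    RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py   # assumption: rpc_cmd forwards here
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The numbered fused_ordering(N) lines that follow are the progress output of that run against the attached 1GB namespace.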
00:20:38.499 [2024-04-25 20:15:36.418942] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540388 ] 00:20:38.823 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.083 Attached to nqn.2016-06.io.spdk:cnode1 00:20:39.083 Namespace ID: 1 size: 1GB 00:20:39.083 fused_ordering(0) 00:20:39.083 fused_ordering(1) 00:20:39.083 fused_ordering(2) 00:20:39.083 fused_ordering(3) 00:20:39.083 fused_ordering(4) 00:20:39.083 fused_ordering(5) 00:20:39.083 fused_ordering(6) 00:20:39.083 fused_ordering(7) 00:20:39.083 fused_ordering(8) 00:20:39.083 fused_ordering(9) 00:20:39.083 fused_ordering(10) 00:20:39.083 fused_ordering(11) 00:20:39.083 fused_ordering(12) 00:20:39.083 fused_ordering(13) 00:20:39.083 fused_ordering(14) 00:20:39.083 fused_ordering(15) 00:20:39.083 fused_ordering(16) 00:20:39.083 fused_ordering(17) 00:20:39.083 fused_ordering(18) 00:20:39.083 fused_ordering(19) 00:20:39.083 fused_ordering(20) 00:20:39.083 fused_ordering(21) 00:20:39.083 fused_ordering(22) 00:20:39.083 fused_ordering(23) 00:20:39.083 fused_ordering(24) 00:20:39.083 fused_ordering(25) 00:20:39.083 fused_ordering(26) 00:20:39.083 fused_ordering(27) 00:20:39.083 fused_ordering(28) 00:20:39.083 fused_ordering(29) 00:20:39.083 fused_ordering(30) 00:20:39.083 fused_ordering(31) 00:20:39.083 fused_ordering(32) 00:20:39.083 fused_ordering(33) 00:20:39.083 fused_ordering(34) 00:20:39.083 fused_ordering(35) 00:20:39.083 fused_ordering(36) 00:20:39.083 fused_ordering(37) 00:20:39.083 fused_ordering(38) 00:20:39.083 fused_ordering(39) 00:20:39.083 fused_ordering(40) 00:20:39.083 fused_ordering(41) 00:20:39.083 fused_ordering(42) 00:20:39.083 fused_ordering(43) 00:20:39.083 fused_ordering(44) 00:20:39.083 fused_ordering(45) 00:20:39.083 fused_ordering(46) 00:20:39.083 fused_ordering(47) 00:20:39.083 fused_ordering(48) 00:20:39.083 fused_ordering(49) 00:20:39.083 fused_ordering(50) 00:20:39.083 fused_ordering(51) 00:20:39.083 fused_ordering(52) 00:20:39.083 fused_ordering(53) 00:20:39.083 fused_ordering(54) 00:20:39.083 fused_ordering(55) 00:20:39.083 fused_ordering(56) 00:20:39.083 fused_ordering(57) 00:20:39.083 fused_ordering(58) 00:20:39.083 fused_ordering(59) 00:20:39.083 fused_ordering(60) 00:20:39.083 fused_ordering(61) 00:20:39.083 fused_ordering(62) 00:20:39.083 fused_ordering(63) 00:20:39.083 fused_ordering(64) 00:20:39.083 fused_ordering(65) 00:20:39.083 fused_ordering(66) 00:20:39.083 fused_ordering(67) 00:20:39.083 fused_ordering(68) 00:20:39.083 fused_ordering(69) 00:20:39.083 fused_ordering(70) 00:20:39.083 fused_ordering(71) 00:20:39.083 fused_ordering(72) 00:20:39.083 fused_ordering(73) 00:20:39.083 fused_ordering(74) 00:20:39.083 fused_ordering(75) 00:20:39.083 fused_ordering(76) 00:20:39.083 fused_ordering(77) 00:20:39.083 fused_ordering(78) 00:20:39.083 fused_ordering(79) 00:20:39.083 fused_ordering(80) 00:20:39.083 fused_ordering(81) 00:20:39.083 fused_ordering(82) 00:20:39.083 fused_ordering(83) 00:20:39.083 fused_ordering(84) 00:20:39.083 fused_ordering(85) 00:20:39.083 fused_ordering(86) 00:20:39.083 fused_ordering(87) 00:20:39.083 fused_ordering(88) 00:20:39.083 fused_ordering(89) 00:20:39.083 fused_ordering(90) 00:20:39.083 fused_ordering(91) 00:20:39.083 fused_ordering(92) 00:20:39.083 fused_ordering(93) 00:20:39.083 fused_ordering(94) 00:20:39.083 fused_ordering(95) 00:20:39.083 fused_ordering(96) 00:20:39.083 
fused_ordering(97) 00:20:39.083 fused_ordering(98) 00:20:39.083 fused_ordering(99) 00:20:39.083 fused_ordering(100) 00:20:39.083 fused_ordering(101) 00:20:39.083 fused_ordering(102) 00:20:39.083 fused_ordering(103) 00:20:39.083 fused_ordering(104) 00:20:39.083 fused_ordering(105) 00:20:39.083 fused_ordering(106) 00:20:39.083 fused_ordering(107) 00:20:39.083 fused_ordering(108) 00:20:39.083 fused_ordering(109) 00:20:39.083 fused_ordering(110) 00:20:39.083 fused_ordering(111) 00:20:39.083 fused_ordering(112) 00:20:39.083 fused_ordering(113) 00:20:39.083 fused_ordering(114) 00:20:39.083 fused_ordering(115) 00:20:39.083 fused_ordering(116) 00:20:39.083 fused_ordering(117) 00:20:39.083 fused_ordering(118) 00:20:39.083 fused_ordering(119) 00:20:39.083 fused_ordering(120) 00:20:39.083 fused_ordering(121) 00:20:39.083 fused_ordering(122) 00:20:39.083 fused_ordering(123) 00:20:39.083 fused_ordering(124) 00:20:39.083 fused_ordering(125) 00:20:39.083 fused_ordering(126) 00:20:39.083 fused_ordering(127) 00:20:39.083 fused_ordering(128) 00:20:39.083 fused_ordering(129) 00:20:39.083 fused_ordering(130) 00:20:39.083 fused_ordering(131) 00:20:39.083 fused_ordering(132) 00:20:39.084 fused_ordering(133) 00:20:39.084 fused_ordering(134) 00:20:39.084 fused_ordering(135) 00:20:39.084 fused_ordering(136) 00:20:39.084 fused_ordering(137) 00:20:39.084 fused_ordering(138) 00:20:39.084 fused_ordering(139) 00:20:39.084 fused_ordering(140) 00:20:39.084 fused_ordering(141) 00:20:39.084 fused_ordering(142) 00:20:39.084 fused_ordering(143) 00:20:39.084 fused_ordering(144) 00:20:39.084 fused_ordering(145) 00:20:39.084 fused_ordering(146) 00:20:39.084 fused_ordering(147) 00:20:39.084 fused_ordering(148) 00:20:39.084 fused_ordering(149) 00:20:39.084 fused_ordering(150) 00:20:39.084 fused_ordering(151) 00:20:39.084 fused_ordering(152) 00:20:39.084 fused_ordering(153) 00:20:39.084 fused_ordering(154) 00:20:39.084 fused_ordering(155) 00:20:39.084 fused_ordering(156) 00:20:39.084 fused_ordering(157) 00:20:39.084 fused_ordering(158) 00:20:39.084 fused_ordering(159) 00:20:39.084 fused_ordering(160) 00:20:39.084 fused_ordering(161) 00:20:39.084 fused_ordering(162) 00:20:39.084 fused_ordering(163) 00:20:39.084 fused_ordering(164) 00:20:39.084 fused_ordering(165) 00:20:39.084 fused_ordering(166) 00:20:39.084 fused_ordering(167) 00:20:39.084 fused_ordering(168) 00:20:39.084 fused_ordering(169) 00:20:39.084 fused_ordering(170) 00:20:39.084 fused_ordering(171) 00:20:39.084 fused_ordering(172) 00:20:39.084 fused_ordering(173) 00:20:39.084 fused_ordering(174) 00:20:39.084 fused_ordering(175) 00:20:39.084 fused_ordering(176) 00:20:39.084 fused_ordering(177) 00:20:39.084 fused_ordering(178) 00:20:39.084 fused_ordering(179) 00:20:39.084 fused_ordering(180) 00:20:39.084 fused_ordering(181) 00:20:39.084 fused_ordering(182) 00:20:39.084 fused_ordering(183) 00:20:39.084 fused_ordering(184) 00:20:39.084 fused_ordering(185) 00:20:39.084 fused_ordering(186) 00:20:39.084 fused_ordering(187) 00:20:39.084 fused_ordering(188) 00:20:39.084 fused_ordering(189) 00:20:39.084 fused_ordering(190) 00:20:39.084 fused_ordering(191) 00:20:39.084 fused_ordering(192) 00:20:39.084 fused_ordering(193) 00:20:39.084 fused_ordering(194) 00:20:39.084 fused_ordering(195) 00:20:39.084 fused_ordering(196) 00:20:39.084 fused_ordering(197) 00:20:39.084 fused_ordering(198) 00:20:39.084 fused_ordering(199) 00:20:39.084 fused_ordering(200) 00:20:39.084 fused_ordering(201) 00:20:39.084 fused_ordering(202) 00:20:39.084 fused_ordering(203) 00:20:39.084 fused_ordering(204) 
00:20:39.084 fused_ordering(205) 00:20:39.345 fused_ordering(206) 00:20:39.345 fused_ordering(207) 00:20:39.345 fused_ordering(208) 00:20:39.345 fused_ordering(209) 00:20:39.345 fused_ordering(210) 00:20:39.345 fused_ordering(211) 00:20:39.345 fused_ordering(212) 00:20:39.345 fused_ordering(213) 00:20:39.345 fused_ordering(214) 00:20:39.345 fused_ordering(215) 00:20:39.345 fused_ordering(216) 00:20:39.345 fused_ordering(217) 00:20:39.345 fused_ordering(218) 00:20:39.345 fused_ordering(219) 00:20:39.345 fused_ordering(220) 00:20:39.345 fused_ordering(221) 00:20:39.345 fused_ordering(222) 00:20:39.345 fused_ordering(223) 00:20:39.345 fused_ordering(224) 00:20:39.345 fused_ordering(225) 00:20:39.345 fused_ordering(226) 00:20:39.345 fused_ordering(227) 00:20:39.345 fused_ordering(228) 00:20:39.345 fused_ordering(229) 00:20:39.345 fused_ordering(230) 00:20:39.345 fused_ordering(231) 00:20:39.345 fused_ordering(232) 00:20:39.345 fused_ordering(233) 00:20:39.345 fused_ordering(234) 00:20:39.345 fused_ordering(235) 00:20:39.345 fused_ordering(236) 00:20:39.345 fused_ordering(237) 00:20:39.345 fused_ordering(238) 00:20:39.345 fused_ordering(239) 00:20:39.345 fused_ordering(240) 00:20:39.345 fused_ordering(241) 00:20:39.345 fused_ordering(242) 00:20:39.345 fused_ordering(243) 00:20:39.345 fused_ordering(244) 00:20:39.345 fused_ordering(245) 00:20:39.345 fused_ordering(246) 00:20:39.345 fused_ordering(247) 00:20:39.345 fused_ordering(248) 00:20:39.345 fused_ordering(249) 00:20:39.345 fused_ordering(250) 00:20:39.345 fused_ordering(251) 00:20:39.345 fused_ordering(252) 00:20:39.345 fused_ordering(253) 00:20:39.345 fused_ordering(254) 00:20:39.345 fused_ordering(255) 00:20:39.345 fused_ordering(256) 00:20:39.345 fused_ordering(257) 00:20:39.345 fused_ordering(258) 00:20:39.345 fused_ordering(259) 00:20:39.345 fused_ordering(260) 00:20:39.345 fused_ordering(261) 00:20:39.345 fused_ordering(262) 00:20:39.345 fused_ordering(263) 00:20:39.345 fused_ordering(264) 00:20:39.345 fused_ordering(265) 00:20:39.345 fused_ordering(266) 00:20:39.345 fused_ordering(267) 00:20:39.345 fused_ordering(268) 00:20:39.345 fused_ordering(269) 00:20:39.345 fused_ordering(270) 00:20:39.345 fused_ordering(271) 00:20:39.345 fused_ordering(272) 00:20:39.345 fused_ordering(273) 00:20:39.345 fused_ordering(274) 00:20:39.345 fused_ordering(275) 00:20:39.345 fused_ordering(276) 00:20:39.345 fused_ordering(277) 00:20:39.345 fused_ordering(278) 00:20:39.345 fused_ordering(279) 00:20:39.345 fused_ordering(280) 00:20:39.345 fused_ordering(281) 00:20:39.345 fused_ordering(282) 00:20:39.345 fused_ordering(283) 00:20:39.345 fused_ordering(284) 00:20:39.345 fused_ordering(285) 00:20:39.345 fused_ordering(286) 00:20:39.345 fused_ordering(287) 00:20:39.345 fused_ordering(288) 00:20:39.345 fused_ordering(289) 00:20:39.345 fused_ordering(290) 00:20:39.345 fused_ordering(291) 00:20:39.345 fused_ordering(292) 00:20:39.345 fused_ordering(293) 00:20:39.345 fused_ordering(294) 00:20:39.345 fused_ordering(295) 00:20:39.345 fused_ordering(296) 00:20:39.345 fused_ordering(297) 00:20:39.345 fused_ordering(298) 00:20:39.345 fused_ordering(299) 00:20:39.345 fused_ordering(300) 00:20:39.345 fused_ordering(301) 00:20:39.345 fused_ordering(302) 00:20:39.345 fused_ordering(303) 00:20:39.345 fused_ordering(304) 00:20:39.345 fused_ordering(305) 00:20:39.345 fused_ordering(306) 00:20:39.345 fused_ordering(307) 00:20:39.345 fused_ordering(308) 00:20:39.345 fused_ordering(309) 00:20:39.345 fused_ordering(310) 00:20:39.345 fused_ordering(311) 00:20:39.345 
fused_ordering(312) 00:20:39.345 fused_ordering(313) 00:20:39.345 fused_ordering(314) 00:20:39.345 fused_ordering(315) 00:20:39.345 fused_ordering(316) 00:20:39.345 fused_ordering(317) 00:20:39.345 fused_ordering(318) 00:20:39.345 fused_ordering(319) 00:20:39.345 fused_ordering(320) 00:20:39.345 fused_ordering(321) 00:20:39.345 fused_ordering(322) 00:20:39.345 fused_ordering(323) 00:20:39.345 fused_ordering(324) 00:20:39.345 fused_ordering(325) 00:20:39.345 fused_ordering(326) 00:20:39.345 fused_ordering(327) 00:20:39.345 fused_ordering(328) 00:20:39.345 fused_ordering(329) 00:20:39.345 fused_ordering(330) 00:20:39.345 fused_ordering(331) 00:20:39.345 fused_ordering(332) 00:20:39.345 fused_ordering(333) 00:20:39.345 fused_ordering(334) 00:20:39.345 fused_ordering(335) 00:20:39.345 fused_ordering(336) 00:20:39.345 fused_ordering(337) 00:20:39.345 fused_ordering(338) 00:20:39.345 fused_ordering(339) 00:20:39.345 fused_ordering(340) 00:20:39.345 fused_ordering(341) 00:20:39.345 fused_ordering(342) 00:20:39.345 fused_ordering(343) 00:20:39.345 fused_ordering(344) 00:20:39.345 fused_ordering(345) 00:20:39.345 fused_ordering(346) 00:20:39.345 fused_ordering(347) 00:20:39.345 fused_ordering(348) 00:20:39.345 fused_ordering(349) 00:20:39.345 fused_ordering(350) 00:20:39.345 fused_ordering(351) 00:20:39.345 fused_ordering(352) 00:20:39.345 fused_ordering(353) 00:20:39.345 fused_ordering(354) 00:20:39.345 fused_ordering(355) 00:20:39.345 fused_ordering(356) 00:20:39.345 fused_ordering(357) 00:20:39.345 fused_ordering(358) 00:20:39.345 fused_ordering(359) 00:20:39.345 fused_ordering(360) 00:20:39.345 fused_ordering(361) 00:20:39.345 fused_ordering(362) 00:20:39.345 fused_ordering(363) 00:20:39.345 fused_ordering(364) 00:20:39.345 fused_ordering(365) 00:20:39.345 fused_ordering(366) 00:20:39.345 fused_ordering(367) 00:20:39.345 fused_ordering(368) 00:20:39.345 fused_ordering(369) 00:20:39.345 fused_ordering(370) 00:20:39.345 fused_ordering(371) 00:20:39.345 fused_ordering(372) 00:20:39.345 fused_ordering(373) 00:20:39.345 fused_ordering(374) 00:20:39.345 fused_ordering(375) 00:20:39.345 fused_ordering(376) 00:20:39.345 fused_ordering(377) 00:20:39.345 fused_ordering(378) 00:20:39.345 fused_ordering(379) 00:20:39.345 fused_ordering(380) 00:20:39.345 fused_ordering(381) 00:20:39.345 fused_ordering(382) 00:20:39.345 fused_ordering(383) 00:20:39.345 fused_ordering(384) 00:20:39.345 fused_ordering(385) 00:20:39.345 fused_ordering(386) 00:20:39.346 fused_ordering(387) 00:20:39.346 fused_ordering(388) 00:20:39.346 fused_ordering(389) 00:20:39.346 fused_ordering(390) 00:20:39.346 fused_ordering(391) 00:20:39.346 fused_ordering(392) 00:20:39.346 fused_ordering(393) 00:20:39.346 fused_ordering(394) 00:20:39.346 fused_ordering(395) 00:20:39.346 fused_ordering(396) 00:20:39.346 fused_ordering(397) 00:20:39.346 fused_ordering(398) 00:20:39.346 fused_ordering(399) 00:20:39.346 fused_ordering(400) 00:20:39.346 fused_ordering(401) 00:20:39.346 fused_ordering(402) 00:20:39.346 fused_ordering(403) 00:20:39.346 fused_ordering(404) 00:20:39.346 fused_ordering(405) 00:20:39.346 fused_ordering(406) 00:20:39.346 fused_ordering(407) 00:20:39.346 fused_ordering(408) 00:20:39.346 fused_ordering(409) 00:20:39.346 fused_ordering(410) 00:20:39.606 fused_ordering(411) 00:20:39.606 fused_ordering(412) 00:20:39.606 fused_ordering(413) 00:20:39.606 fused_ordering(414) 00:20:39.606 fused_ordering(415) 00:20:39.606 fused_ordering(416) 00:20:39.606 fused_ordering(417) 00:20:39.606 fused_ordering(418) 00:20:39.606 fused_ordering(419) 
00:20:39.606 fused_ordering(420) 00:20:39.606 fused_ordering(421) 00:20:39.606 fused_ordering(422) 00:20:39.606 fused_ordering(423) 00:20:39.606 fused_ordering(424) 00:20:39.606 fused_ordering(425) 00:20:39.606 fused_ordering(426) 00:20:39.606 fused_ordering(427) 00:20:39.606 fused_ordering(428) 00:20:39.606 fused_ordering(429) 00:20:39.606 fused_ordering(430) 00:20:39.606 fused_ordering(431) 00:20:39.606 fused_ordering(432) 00:20:39.606 fused_ordering(433) 00:20:39.606 fused_ordering(434) 00:20:39.606 fused_ordering(435) 00:20:39.606 fused_ordering(436) 00:20:39.606 fused_ordering(437) 00:20:39.606 fused_ordering(438) 00:20:39.606 fused_ordering(439) 00:20:39.606 fused_ordering(440) 00:20:39.606 fused_ordering(441) 00:20:39.606 fused_ordering(442) 00:20:39.606 fused_ordering(443) 00:20:39.606 fused_ordering(444) 00:20:39.606 fused_ordering(445) 00:20:39.606 fused_ordering(446) 00:20:39.606 fused_ordering(447) 00:20:39.606 fused_ordering(448) 00:20:39.606 fused_ordering(449) 00:20:39.606 fused_ordering(450) 00:20:39.606 fused_ordering(451) 00:20:39.606 fused_ordering(452) 00:20:39.606 fused_ordering(453) 00:20:39.606 fused_ordering(454) 00:20:39.606 fused_ordering(455) 00:20:39.606 fused_ordering(456) 00:20:39.606 fused_ordering(457) 00:20:39.606 fused_ordering(458) 00:20:39.606 fused_ordering(459) 00:20:39.606 fused_ordering(460) 00:20:39.606 fused_ordering(461) 00:20:39.606 fused_ordering(462) 00:20:39.606 fused_ordering(463) 00:20:39.606 fused_ordering(464) 00:20:39.606 fused_ordering(465) 00:20:39.606 fused_ordering(466) 00:20:39.606 fused_ordering(467) 00:20:39.606 fused_ordering(468) 00:20:39.606 fused_ordering(469) 00:20:39.606 fused_ordering(470) 00:20:39.606 fused_ordering(471) 00:20:39.606 fused_ordering(472) 00:20:39.606 fused_ordering(473) 00:20:39.606 fused_ordering(474) 00:20:39.606 fused_ordering(475) 00:20:39.606 fused_ordering(476) 00:20:39.606 fused_ordering(477) 00:20:39.606 fused_ordering(478) 00:20:39.606 fused_ordering(479) 00:20:39.606 fused_ordering(480) 00:20:39.606 fused_ordering(481) 00:20:39.606 fused_ordering(482) 00:20:39.606 fused_ordering(483) 00:20:39.606 fused_ordering(484) 00:20:39.606 fused_ordering(485) 00:20:39.606 fused_ordering(486) 00:20:39.606 fused_ordering(487) 00:20:39.606 fused_ordering(488) 00:20:39.606 fused_ordering(489) 00:20:39.606 fused_ordering(490) 00:20:39.606 fused_ordering(491) 00:20:39.606 fused_ordering(492) 00:20:39.606 fused_ordering(493) 00:20:39.606 fused_ordering(494) 00:20:39.606 fused_ordering(495) 00:20:39.606 fused_ordering(496) 00:20:39.606 fused_ordering(497) 00:20:39.606 fused_ordering(498) 00:20:39.606 fused_ordering(499) 00:20:39.606 fused_ordering(500) 00:20:39.606 fused_ordering(501) 00:20:39.606 fused_ordering(502) 00:20:39.606 fused_ordering(503) 00:20:39.606 fused_ordering(504) 00:20:39.606 fused_ordering(505) 00:20:39.606 fused_ordering(506) 00:20:39.606 fused_ordering(507) 00:20:39.606 fused_ordering(508) 00:20:39.606 fused_ordering(509) 00:20:39.606 fused_ordering(510) 00:20:39.606 fused_ordering(511) 00:20:39.606 fused_ordering(512) 00:20:39.606 fused_ordering(513) 00:20:39.606 fused_ordering(514) 00:20:39.606 fused_ordering(515) 00:20:39.606 fused_ordering(516) 00:20:39.606 fused_ordering(517) 00:20:39.606 fused_ordering(518) 00:20:39.606 fused_ordering(519) 00:20:39.606 fused_ordering(520) 00:20:39.606 fused_ordering(521) 00:20:39.606 fused_ordering(522) 00:20:39.606 fused_ordering(523) 00:20:39.606 fused_ordering(524) 00:20:39.606 fused_ordering(525) 00:20:39.606 fused_ordering(526) 00:20:39.606 
fused_ordering(527) 00:20:39.606 ... 00:20:40.432 fused_ordering(1023) [per-iteration fused_ordering output for iterations 527-1023, all completed, condensed]
20:15:38 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:20:40.432 20:15:38 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:20:40.432 20:15:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:40.432 20:15:38 -- nvmf/common.sh@116 -- # sync 00:20:40.432 20:15:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:40.432 20:15:38 -- nvmf/common.sh@119 -- # set +e 00:20:40.432 20:15:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:40.432 20:15:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:40.432 rmmod nvme_tcp 00:20:40.432 rmmod nvme_fabrics 00:20:40.432 rmmod nvme_keyring 00:20:40.432 20:15:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:40.432 20:15:38 -- nvmf/common.sh@123 -- # set -e 00:20:40.432 20:15:38 -- nvmf/common.sh@124 -- # return 0 00:20:40.432 20:15:38 -- nvmf/common.sh@477 -- # '[' -n 1540080 ']' 00:20:40.432 20:15:38 -- nvmf/common.sh@478 -- # killprocess 1540080 00:20:40.432 20:15:38 -- common/autotest_common.sh@926 -- # '[' -z 1540080 ']' 00:20:40.432 20:15:38 -- common/autotest_common.sh@930 -- # kill -0 1540080 00:20:40.432 20:15:38 -- common/autotest_common.sh@931 -- # uname 00:20:40.432 20:15:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:40.432 20:15:38 -- common/autotest_common.sh@932 -- # ps --no-headers
-o comm= 1540080 00:20:40.432 20:15:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:40.433 20:15:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:40.433 20:15:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1540080' 00:20:40.433 killing process with pid 1540080 00:20:40.433 20:15:38 -- common/autotest_common.sh@945 -- # kill 1540080 00:20:40.433 20:15:38 -- common/autotest_common.sh@950 -- # wait 1540080 00:20:41.000 20:15:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:41.000 20:15:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:41.000 20:15:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:41.000 20:15:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:41.000 20:15:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:41.000 20:15:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.000 20:15:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:41.000 20:15:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.908 20:15:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:42.908 00:20:42.908 real 0m10.921s 00:20:42.908 user 0m5.903s 00:20:42.908 sys 0m5.066s 00:20:42.908 20:15:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:42.908 20:15:40 -- common/autotest_common.sh@10 -- # set +x 00:20:42.908 ************************************ 00:20:42.908 END TEST nvmf_fused_ordering 00:20:42.908 ************************************ 00:20:42.908 20:15:40 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:20:42.908 20:15:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:42.908 20:15:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:42.908 20:15:40 -- common/autotest_common.sh@10 -- # set +x 00:20:42.908 ************************************ 00:20:42.908 START TEST nvmf_delete_subsystem 00:20:42.908 ************************************ 00:20:42.908 20:15:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:20:43.169 * Looking for test storage... 
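The teardown that closed the fused_ordering block above is the same one every nvmf test in this job uses: unload the NVMe/TCP kernel modules with retries, then stop the nvmf_tgt process. A minimal sketch of that pattern, with nvmf_pid as a hypothetical placeholder rather than the pid from this run:

    # Sketch of the per-test teardown seen in the trace: retry unloading nvme-tcp
    # (modprobe -r also drops the now-unused nvme_fabrics/nvme_keyring modules,
    # which is what the rmmod lines report), then stop the nvmf_tgt process.
    # nvmf_pid is a hypothetical placeholder; the harness records the real pid at startup.
    nvmf_pid=${nvmf_pid:-123456}

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e

    if kill -0 "$nvmf_pid" 2>/dev/null; then
        echo "killing process with pid $nvmf_pid"
        kill "$nvmf_pid" || true
        wait "$nvmf_pid" 2>/dev/null || true   # only reaps if nvmf_tgt is a child of this shell
    fi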
00:20:43.169 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:20:43.169 20:15:40 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:43.169 20:15:40 -- nvmf/common.sh@7 -- # uname -s 00:20:43.169 20:15:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.169 20:15:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.169 20:15:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.169 20:15:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.169 20:15:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.169 20:15:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.169 20:15:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.169 20:15:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.169 20:15:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.169 20:15:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.169 20:15:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:43.169 20:15:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:43.170 20:15:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.170 20:15:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.170 20:15:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:43.170 20:15:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:43.170 20:15:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.170 20:15:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.170 20:15:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.170 20:15:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.170 20:15:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.170 20:15:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.170 20:15:40 -- paths/export.sh@5 -- # export PATH 00:20:43.170 20:15:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.170 20:15:40 -- nvmf/common.sh@46 -- # : 0 00:20:43.170 20:15:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:43.170 20:15:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:43.170 20:15:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:43.170 20:15:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.170 20:15:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.170 20:15:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:43.170 20:15:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:43.170 20:15:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:43.170 20:15:40 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:20:43.170 20:15:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:43.170 20:15:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.170 20:15:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:43.170 20:15:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:43.170 20:15:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:43.170 20:15:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.170 20:15:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.170 20:15:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.170 20:15:40 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:20:43.170 20:15:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:43.170 20:15:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:43.170 20:15:40 -- common/autotest_common.sh@10 -- # set +x 00:20:48.437 20:15:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:48.437 20:15:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:48.437 20:15:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:48.437 20:15:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:48.437 20:15:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:48.437 20:15:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:48.437 20:15:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:48.437 20:15:46 -- nvmf/common.sh@294 -- # net_devs=() 00:20:48.437 20:15:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:48.437 20:15:46 -- nvmf/common.sh@295 -- # e810=() 00:20:48.437 20:15:46 -- nvmf/common.sh@295 -- # local -ga e810 00:20:48.437 20:15:46 -- nvmf/common.sh@296 -- 
# x722=() 00:20:48.437 20:15:46 -- nvmf/common.sh@296 -- # local -ga x722 00:20:48.437 20:15:46 -- nvmf/common.sh@297 -- # mlx=() 00:20:48.437 20:15:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:48.437 20:15:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.437 20:15:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.437 20:15:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.437 20:15:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.437 20:15:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.437 20:15:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.437 20:15:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.437 20:15:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.437 20:15:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.437 20:15:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.437 20:15:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.437 20:15:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:48.437 20:15:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:48.437 20:15:46 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:20:48.437 20:15:46 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:20:48.437 20:15:46 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:20:48.437 20:15:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:48.437 20:15:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:48.437 20:15:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:48.437 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:48.437 20:15:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:48.437 20:15:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:48.437 20:15:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.438 20:15:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.438 20:15:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:48.438 20:15:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:48.438 20:15:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:48.438 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:48.438 20:15:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:48.438 20:15:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:48.438 20:15:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.438 20:15:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.438 20:15:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:48.438 20:15:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:48.438 20:15:46 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:20:48.438 20:15:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:48.438 20:15:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.438 20:15:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:48.438 20:15:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.438 20:15:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:48.438 Found net devices under 0000:27:00.0: cvl_0_0 00:20:48.438 20:15:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.438 20:15:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
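The device discovery above walks each NIC PCI function and asks sysfs which kernel net devices sit under it; the cvl_0_0/cvl_0_1 names come straight from that lookup. A condensed sketch of the same lookup, using the two PCI addresses reported in this run:

    # List the net devices exposed under each NIC PCI function, as the discovery
    # step above does. The PCI addresses are the ones reported in this run.
    for pci in 0000:27:00.0 0000:27:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done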
00:20:48.438 20:15:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.438 20:15:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:48.438 20:15:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.438 20:15:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:48.438 Found net devices under 0000:27:00.1: cvl_0_1 00:20:48.438 20:15:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.438 20:15:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:48.438 20:15:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:48.438 20:15:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:48.438 20:15:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:48.438 20:15:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:48.438 20:15:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.438 20:15:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:48.438 20:15:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:48.438 20:15:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:48.438 20:15:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:48.438 20:15:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:48.438 20:15:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:48.438 20:15:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:48.438 20:15:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:48.438 20:15:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:48.438 20:15:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:48.438 20:15:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:48.438 20:15:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:48.438 20:15:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:48.438 20:15:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:48.438 20:15:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:48.438 20:15:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:48.697 20:15:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:48.697 20:15:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:48.697 20:15:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:48.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:48.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:20:48.697 00:20:48.697 --- 10.0.0.2 ping statistics --- 00:20:48.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.697 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:20:48.697 20:15:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:48.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
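nvmf_tcp_init, whose commands appear above, splits the two NIC ports into a target side and an initiator side: cvl_0_0 is moved into its own network namespace as 10.0.0.2, cvl_0_1 stays in the default namespace as 10.0.0.1, and TCP port 4420 is opened before both directions are ping-tested. Reduced to its essentials (names and addresses as used in this run), the wiring is roughly:

    # Sketch of the TCP test wiring above: one NIC port per side, target port
    # isolated in its own network namespace. Names and addresses are from this run.
    ns=cvl_0_0_ns_spdk
    target_if=cvl_0_0       # becomes 10.0.0.2 inside the namespace
    initiator_if=cvl_0_1    # stays in the default namespace as 10.0.0.1

    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"
    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1     # target -> initiator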
00:20:48.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.400 ms 00:20:48.697 00:20:48.697 --- 10.0.0.1 ping statistics --- 00:20:48.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.697 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:20:48.697 20:15:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.697 20:15:46 -- nvmf/common.sh@410 -- # return 0 00:20:48.697 20:15:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:48.697 20:15:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.697 20:15:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:48.697 20:15:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:48.697 20:15:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.697 20:15:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:48.697 20:15:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:48.697 20:15:46 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:20:48.697 20:15:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:48.697 20:15:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:48.697 20:15:46 -- common/autotest_common.sh@10 -- # set +x 00:20:48.697 20:15:46 -- nvmf/common.sh@469 -- # nvmfpid=1544583 00:20:48.697 20:15:46 -- nvmf/common.sh@470 -- # waitforlisten 1544583 00:20:48.697 20:15:46 -- common/autotest_common.sh@819 -- # '[' -z 1544583 ']' 00:20:48.697 20:15:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.697 20:15:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:48.697 20:15:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.697 20:15:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:48.697 20:15:46 -- common/autotest_common.sh@10 -- # set +x 00:20:48.697 20:15:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:48.697 [2024-04-25 20:15:46.579983] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:48.697 [2024-04-25 20:15:46.580069] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.957 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.957 [2024-04-25 20:15:46.683580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:48.957 [2024-04-25 20:15:46.788800] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:48.957 [2024-04-25 20:15:46.789004] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.957 [2024-04-25 20:15:46.789021] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.957 [2024-04-25 20:15:46.789031] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
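nvmfappstart then launches the target application inside that namespace with a two-core mask and blocks until its RPC socket answers, which is what the "Waiting for process to start up and listen on UNIX domain socket" line reflects. A simplified stand-in for that launch-and-wait step, assuming the default /var/tmp/spdk.sock RPC socket:

    # Simplified stand-in for nvmfappstart: run nvmf_tgt inside the target namespace
    # with a two-core mask and poll the RPC socket until it responds. Paths are
    # relative to an SPDK checkout; the real helper also sets traps and log capture.
    ns=cvl_0_0_ns_spdk
    ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    echo "nvmf_tgt is up (pid $nvmfpid)"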
00:20:48.957 [2024-04-25 20:15:46.789096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.957 [2024-04-25 20:15:46.789101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.527 20:15:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:49.527 20:15:47 -- common/autotest_common.sh@852 -- # return 0 00:20:49.527 20:15:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:49.527 20:15:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:49.527 20:15:47 -- common/autotest_common.sh@10 -- # set +x 00:20:49.527 20:15:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.527 20:15:47 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:49.527 20:15:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.527 20:15:47 -- common/autotest_common.sh@10 -- # set +x 00:20:49.527 [2024-04-25 20:15:47.322808] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.527 20:15:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.527 20:15:47 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:49.527 20:15:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.527 20:15:47 -- common/autotest_common.sh@10 -- # set +x 00:20:49.527 20:15:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.527 20:15:47 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:49.527 20:15:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.527 20:15:47 -- common/autotest_common.sh@10 -- # set +x 00:20:49.527 [2024-04-25 20:15:47.343031] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.527 20:15:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.527 20:15:47 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:49.527 20:15:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.527 20:15:47 -- common/autotest_common.sh@10 -- # set +x 00:20:49.527 NULL1 00:20:49.527 20:15:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.527 20:15:47 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:49.527 20:15:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.527 20:15:47 -- common/autotest_common.sh@10 -- # set +x 00:20:49.527 Delay0 00:20:49.527 20:15:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.527 20:15:47 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:49.527 20:15:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.527 20:15:47 -- common/autotest_common.sh@10 -- # set +x 00:20:49.527 20:15:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.527 20:15:47 -- target/delete_subsystem.sh@28 -- # perf_pid=1544896 00:20:49.527 20:15:47 -- target/delete_subsystem.sh@30 -- # sleep 2 00:20:49.527 20:15:47 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:20:49.527 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.527 [2024-04-25 20:15:47.454326] 
subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:52.060 20:15:49 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:52.060 20:15:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:52.060 20:15:49 -- common/autotest_common.sh@10 -- # set +x 00:20:52.060
[repeated "Read/Write completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" entries from the in-flight perf I/O, interleaved with the state-change errors below, condensed]
00:20:52.061 [2024-04-25 20:15:49.759799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61300000ffc0 is same with the state(5) to be set
00:20:52.061 [2024-04-25 20:15:49.760761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000002a40 is same with the state(5) to be set
00:20:52.999 [2024-04-25 20:15:50.716480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000002180 is same with the state(5) to be set
00:20:53.000 [2024-04-25 20:15:50.759833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000010a40 is same with the state(5) to be set
00:20:53.000 [2024-04-25 20:15:50.761186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6130000026c0 is same with the state(5) to be set
00:20:53.000 [2024-04-25 20:15:50.761355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000002dc0 is same with the state(5) to be set
00:20:53.000 [2024-04-25
20:15:50.763249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000010340 is same with the state(5) to be set 00:20:53.000 [2024-04-25 20:15:50.764116] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000002180 (9): Bad file descriptor 00:20:53.000 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:20:53.000 20:15:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:53.000 20:15:50 -- target/delete_subsystem.sh@34 -- # delay=0 00:20:53.000 20:15:50 -- target/delete_subsystem.sh@35 -- # kill -0 1544896 00:20:53.000 20:15:50 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:20:53.000 Initializing NVMe Controllers 00:20:53.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:53.000 Controller IO queue size 128, less than required. 00:20:53.000 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:53.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:20:53.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:20:53.000 Initialization complete. Launching workers. 00:20:53.000 ======================================================== 00:20:53.000 Latency(us) 00:20:53.000 Device Information : IOPS MiB/s Average min max 00:20:53.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.81 0.08 901425.50 398.59 1012707.13 00:20:53.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.81 0.08 901191.38 543.78 1012964.47 00:20:53.000 ======================================================== 00:20:53.000 Total : 333.61 0.16 901308.44 398.59 1012964.47 00:20:53.000 00:20:53.567 20:15:51 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:20:53.567 20:15:51 -- target/delete_subsystem.sh@35 -- # kill -0 1544896 00:20:53.567 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1544896) - No such process 00:20:53.567 20:15:51 -- target/delete_subsystem.sh@45 -- # NOT wait 1544896 00:20:53.567 20:15:51 -- common/autotest_common.sh@640 -- # local es=0 00:20:53.567 20:15:51 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 1544896 00:20:53.567 20:15:51 -- common/autotest_common.sh@628 -- # local arg=wait 00:20:53.567 20:15:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:53.567 20:15:51 -- common/autotest_common.sh@632 -- # type -t wait 00:20:53.567 20:15:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:53.567 20:15:51 -- common/autotest_common.sh@643 -- # wait 1544896 00:20:53.567 20:15:51 -- common/autotest_common.sh@643 -- # es=1 00:20:53.567 20:15:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:53.567 20:15:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:53.567 20:15:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:53.567 20:15:51 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:53.567 20:15:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:53.567 20:15:51 -- common/autotest_common.sh@10 -- # set +x 00:20:53.567 20:15:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:53.567 20:15:51 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 
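The subsystem both perf passes run against is built entirely over RPC: a TCP transport, a null bdev wrapped in a delay bdev (so plenty of I/O is still in flight when the subsystem is deleted, which is what produced the error storm above), and an NQN exposing that bdev on 10.0.0.2:4420. The trace drives this through the rpc_cmd wrapper; expressed as plain scripts/rpc.py calls, using the exact values visible above, the sequence is roughly:

    # Hedged sketch of the RPC sequence exercised by this test, as plain rpc.py calls.
    rpc=./scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                # 1000 MB null bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s injected latency (values in us)
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0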
-s 4420 00:20:53.567 20:15:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:53.567 20:15:51 -- common/autotest_common.sh@10 -- # set +x 00:20:53.567 [2024-04-25 20:15:51.285074] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.567 20:15:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:53.567 20:15:51 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:53.567 20:15:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:53.567 20:15:51 -- common/autotest_common.sh@10 -- # set +x 00:20:53.567 20:15:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:53.567 20:15:51 -- target/delete_subsystem.sh@54 -- # perf_pid=1545501 00:20:53.567 20:15:51 -- target/delete_subsystem.sh@56 -- # delay=0 00:20:53.567 20:15:51 -- target/delete_subsystem.sh@57 -- # kill -0 1545501 00:20:53.567 20:15:51 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:53.567 20:15:51 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:20:53.567 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.567 [2024-04-25 20:15:51.381375] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:54.135 20:15:51 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:54.135 20:15:51 -- target/delete_subsystem.sh@57 -- # kill -0 1545501 00:20:54.135 20:15:51 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:54.395 20:15:52 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:54.395 20:15:52 -- target/delete_subsystem.sh@57 -- # kill -0 1545501 00:20:54.395 20:15:52 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:54.962 20:15:52 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:54.962 20:15:52 -- target/delete_subsystem.sh@57 -- # kill -0 1545501 00:20:54.962 20:15:52 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:55.528 20:15:53 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:55.528 20:15:53 -- target/delete_subsystem.sh@57 -- # kill -0 1545501 00:20:55.528 20:15:53 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:56.097 20:15:53 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:56.097 20:15:53 -- target/delete_subsystem.sh@57 -- # kill -0 1545501 00:20:56.097 20:15:53 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:56.662 20:15:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:56.662 20:15:54 -- target/delete_subsystem.sh@57 -- # kill -0 1545501 00:20:56.662 20:15:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:56.662 Initializing NVMe Controllers 00:20:56.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:56.662 Controller IO queue size 128, less than required. 00:20:56.662 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:56.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:20:56.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:20:56.662 Initialization complete. Launching workers. 
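Both passes finish the same way: the subsystem is deleted while spdk_nvme_perf is still issuing I/O, and the script then polls the perf pid (the kill -0 / sleep 0.5 lines above) until the process exits on its own, bounded by a retry budget. That polling idiom, with perf_pid standing in for whichever pid the run recorded:

    # Sketch of the wait-for-perf-to-exit loop visible above. perf_pid is whichever
    # pid spdk_nvme_perf was started with; the retry budget mirrors the trace.
    perf_pid=${perf_pid:-1545501}

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        if (( delay++ > 20 )); then
            echo "perf pid $perf_pid still running after subsystem delete" >&2
            exit 1
        fi
        sleep 0.5
    done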
00:20:56.662 ======================================================== 00:20:56.662 Latency(us) 00:20:56.662 Device Information : IOPS MiB/s Average min max 00:20:56.662 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004085.17 1000136.68 1042662.44 00:20:56.662 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003976.68 1000141.25 1011676.64 00:20:56.662 ======================================================== 00:20:56.662 Total : 256.00 0.12 1004030.92 1000136.68 1042662.44 00:20:56.662 00:20:56.918 20:15:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:56.918 20:15:54 -- target/delete_subsystem.sh@57 -- # kill -0 1545501 00:20:56.918 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1545501) - No such process 00:20:56.918 20:15:54 -- target/delete_subsystem.sh@67 -- # wait 1545501 00:20:56.918 20:15:54 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:56.918 20:15:54 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:20:56.918 20:15:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:56.918 20:15:54 -- nvmf/common.sh@116 -- # sync 00:20:56.918 20:15:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:56.918 20:15:54 -- nvmf/common.sh@119 -- # set +e 00:20:56.919 20:15:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:56.919 20:15:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:56.919 rmmod nvme_tcp 00:20:57.178 rmmod nvme_fabrics 00:20:57.178 rmmod nvme_keyring 00:20:57.178 20:15:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:57.178 20:15:54 -- nvmf/common.sh@123 -- # set -e 00:20:57.178 20:15:54 -- nvmf/common.sh@124 -- # return 0 00:20:57.178 20:15:54 -- nvmf/common.sh@477 -- # '[' -n 1544583 ']' 00:20:57.178 20:15:54 -- nvmf/common.sh@478 -- # killprocess 1544583 00:20:57.178 20:15:54 -- common/autotest_common.sh@926 -- # '[' -z 1544583 ']' 00:20:57.178 20:15:54 -- common/autotest_common.sh@930 -- # kill -0 1544583 00:20:57.178 20:15:54 -- common/autotest_common.sh@931 -- # uname 00:20:57.178 20:15:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:57.178 20:15:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1544583 00:20:57.178 20:15:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:57.178 20:15:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:57.178 20:15:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1544583' 00:20:57.178 killing process with pid 1544583 00:20:57.178 20:15:54 -- common/autotest_common.sh@945 -- # kill 1544583 00:20:57.178 20:15:54 -- common/autotest_common.sh@950 -- # wait 1544583 00:20:57.747 20:15:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:57.747 20:15:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:57.747 20:15:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:57.747 20:15:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.747 20:15:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:57.747 20:15:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.747 20:15:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.747 20:15:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.649 20:15:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:59.649 00:20:59.649 real 0m16.643s 00:20:59.649 user 0m30.970s 00:20:59.649 sys 0m4.951s 00:20:59.649 
20:15:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:59.649 20:15:57 -- common/autotest_common.sh@10 -- # set +x 00:20:59.649 ************************************ 00:20:59.649 END TEST nvmf_delete_subsystem 00:20:59.649 ************************************ 00:20:59.649 20:15:57 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:20:59.649 20:15:57 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:20:59.649 20:15:57 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:59.649 20:15:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:59.649 20:15:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:59.649 20:15:57 -- common/autotest_common.sh@10 -- # set +x 00:20:59.649 ************************************ 00:20:59.649 START TEST nvmf_host_management 00:20:59.649 ************************************ 00:20:59.649 20:15:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:59.649 * Looking for test storage... 00:20:59.910 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:20:59.910 20:15:57 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:59.910 20:15:57 -- nvmf/common.sh@7 -- # uname -s 00:20:59.910 20:15:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.910 20:15:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.910 20:15:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.910 20:15:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.910 20:15:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:59.910 20:15:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.910 20:15:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.910 20:15:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.910 20:15:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.910 20:15:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.910 20:15:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:59.910 20:15:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:59.910 20:15:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.910 20:15:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.910 20:15:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:59.910 20:15:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:59.910 20:15:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.910 20:15:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.910 20:15:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.910 20:15:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:20:59.910 20:15:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.910 20:15:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.910 20:15:57 -- paths/export.sh@5 -- # export PATH 00:20:59.910 20:15:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.910 20:15:57 -- nvmf/common.sh@46 -- # : 0 00:20:59.910 20:15:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:59.910 20:15:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:59.910 20:15:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:59.910 20:15:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.910 20:15:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.910 20:15:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:59.910 20:15:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:59.910 20:15:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:59.910 20:15:57 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:59.910 20:15:57 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:59.910 20:15:57 -- target/host_management.sh@104 -- # nvmftestinit 00:20:59.910 20:15:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:59.910 20:15:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.910 20:15:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:59.910 20:15:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:59.910 20:15:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:59.910 20:15:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.910 20:15:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:59.910 20:15:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.910 20:15:57 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:20:59.910 20:15:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:59.910 20:15:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:59.910 20:15:57 -- common/autotest_common.sh@10 -- # set +x 
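For reference, the nvmftestinit phase that the next records walk through first scans the PCI bus for supported NICs (the two ice/E810 ports reported below as cvl_0_0 and cvl_0_1) and then isolates one of them in a private network namespace so NVMe/TCP can be exercised over real hardware on a single node. A condensed shell sketch of that plumbing, using the interface names and 10.0.0.x addresses this node reports further down (illustrative only, not the exact nvmf/common.sh code path):

  ip netns add cvl_0_0_ns_spdk                                  # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic through
  ping -c 1 10.0.0.2                                            # reachability check before starting nvmf_tgt

Both pings in the records below succeed, after which the target application is launched inside the namespace via ip netns exec.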
00:21:05.224 20:16:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:05.224 20:16:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:05.224 20:16:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:05.224 20:16:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:05.224 20:16:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:05.224 20:16:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:05.224 20:16:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:05.224 20:16:03 -- nvmf/common.sh@294 -- # net_devs=() 00:21:05.224 20:16:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:05.224 20:16:03 -- nvmf/common.sh@295 -- # e810=() 00:21:05.224 20:16:03 -- nvmf/common.sh@295 -- # local -ga e810 00:21:05.224 20:16:03 -- nvmf/common.sh@296 -- # x722=() 00:21:05.224 20:16:03 -- nvmf/common.sh@296 -- # local -ga x722 00:21:05.224 20:16:03 -- nvmf/common.sh@297 -- # mlx=() 00:21:05.224 20:16:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:05.224 20:16:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.224 20:16:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.224 20:16:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.224 20:16:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.224 20:16:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.224 20:16:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.224 20:16:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.224 20:16:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.224 20:16:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.224 20:16:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.224 20:16:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.224 20:16:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:05.224 20:16:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:05.224 20:16:03 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:21:05.224 20:16:03 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:21:05.224 20:16:03 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:21:05.224 20:16:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:05.224 20:16:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:05.224 20:16:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:05.224 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:05.224 20:16:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:05.224 20:16:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:05.224 20:16:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.224 20:16:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.224 20:16:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:05.224 20:16:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:05.224 20:16:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:05.224 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:05.224 20:16:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:05.224 20:16:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:05.224 20:16:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.224 20:16:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.224 20:16:03 -- nvmf/common.sh@351 -- # [[ tcp == 
rdma ]] 00:21:05.224 20:16:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:05.224 20:16:03 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:21:05.224 20:16:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:05.224 20:16:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.224 20:16:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:05.224 20:16:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.224 20:16:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:05.224 Found net devices under 0000:27:00.0: cvl_0_0 00:21:05.224 20:16:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.224 20:16:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:05.224 20:16:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.225 20:16:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:05.225 20:16:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.225 20:16:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:05.225 Found net devices under 0000:27:00.1: cvl_0_1 00:21:05.225 20:16:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.225 20:16:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:05.225 20:16:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:05.225 20:16:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:05.225 20:16:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:05.225 20:16:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:05.225 20:16:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.225 20:16:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.225 20:16:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.225 20:16:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:05.225 20:16:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.225 20:16:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.225 20:16:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:05.225 20:16:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.225 20:16:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.225 20:16:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:05.225 20:16:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:05.225 20:16:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.225 20:16:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.484 20:16:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:05.484 20:16:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.484 20:16:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:05.484 20:16:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:05.484 20:16:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:05.484 20:16:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:05.484 20:16:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:05.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:05.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:21:05.484 00:21:05.484 --- 10.0.0.2 ping statistics --- 00:21:05.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.484 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:21:05.484 20:16:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:05.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:05.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.441 ms 00:21:05.484 00:21:05.484 --- 10.0.0.1 ping statistics --- 00:21:05.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.484 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:21:05.484 20:16:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.484 20:16:03 -- nvmf/common.sh@410 -- # return 0 00:21:05.484 20:16:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:05.484 20:16:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.484 20:16:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:05.484 20:16:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:05.484 20:16:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.484 20:16:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:05.484 20:16:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:05.484 20:16:03 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:21:05.484 20:16:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:05.484 20:16:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:05.484 20:16:03 -- common/autotest_common.sh@10 -- # set +x 00:21:05.484 ************************************ 00:21:05.484 START TEST nvmf_host_management 00:21:05.484 ************************************ 00:21:05.484 20:16:03 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:21:05.484 20:16:03 -- target/host_management.sh@69 -- # starttarget 00:21:05.484 20:16:03 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:21:05.484 20:16:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:05.484 20:16:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:05.484 20:16:03 -- common/autotest_common.sh@10 -- # set +x 00:21:05.484 20:16:03 -- nvmf/common.sh@469 -- # nvmfpid=1550303 00:21:05.484 20:16:03 -- nvmf/common.sh@470 -- # waitforlisten 1550303 00:21:05.484 20:16:03 -- common/autotest_common.sh@819 -- # '[' -z 1550303 ']' 00:21:05.484 20:16:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.484 20:16:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:05.484 20:16:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.484 20:16:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:05.484 20:16:03 -- common/autotest_common.sh@10 -- # set +x 00:21:05.484 20:16:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:05.742 [2024-04-25 20:16:03.430317] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:21:05.742 [2024-04-25 20:16:03.430444] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.742 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.742 [2024-04-25 20:16:03.570479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:05.999 [2024-04-25 20:16:03.680913] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:05.999 [2024-04-25 20:16:03.681094] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.999 [2024-04-25 20:16:03.681110] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.999 [2024-04-25 20:16:03.681119] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:05.999 [2024-04-25 20:16:03.681196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.999 [2024-04-25 20:16:03.681312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:05.999 [2024-04-25 20:16:03.681438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.999 [2024-04-25 20:16:03.681467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:06.258 20:16:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:06.258 20:16:04 -- common/autotest_common.sh@852 -- # return 0 00:21:06.258 20:16:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:06.258 20:16:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:06.258 20:16:04 -- common/autotest_common.sh@10 -- # set +x 00:21:06.258 20:16:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.258 20:16:04 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:06.258 20:16:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:06.258 20:16:04 -- common/autotest_common.sh@10 -- # set +x 00:21:06.258 [2024-04-25 20:16:04.181715] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.519 20:16:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:06.519 20:16:04 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:21:06.519 20:16:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:06.519 20:16:04 -- common/autotest_common.sh@10 -- # set +x 00:21:06.519 20:16:04 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:06.519 20:16:04 -- target/host_management.sh@23 -- # cat 00:21:06.519 20:16:04 -- target/host_management.sh@30 -- # rpc_cmd 00:21:06.519 20:16:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:06.519 20:16:04 -- common/autotest_common.sh@10 -- # set +x 00:21:06.519 Malloc0 00:21:06.519 [2024-04-25 20:16:04.260330] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.519 20:16:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:06.519 20:16:04 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:21:06.519 20:16:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:06.519 20:16:04 -- common/autotest_common.sh@10 -- # set +x 00:21:06.519 20:16:04 -- target/host_management.sh@73 -- # perfpid=1550632 00:21:06.519 20:16:04 -- target/host_management.sh@74 -- # 
waitforlisten 1550632 /var/tmp/bdevperf.sock 00:21:06.519 20:16:04 -- common/autotest_common.sh@819 -- # '[' -z 1550632 ']' 00:21:06.519 20:16:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:06.519 20:16:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:06.519 20:16:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:06.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:06.519 20:16:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:06.519 20:16:04 -- common/autotest_common.sh@10 -- # set +x 00:21:06.519 20:16:04 -- target/host_management.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:06.519 20:16:04 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:21:06.519 20:16:04 -- nvmf/common.sh@520 -- # config=() 00:21:06.519 20:16:04 -- nvmf/common.sh@520 -- # local subsystem config 00:21:06.519 20:16:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:06.519 20:16:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:06.519 { 00:21:06.519 "params": { 00:21:06.519 "name": "Nvme$subsystem", 00:21:06.519 "trtype": "$TEST_TRANSPORT", 00:21:06.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.519 "adrfam": "ipv4", 00:21:06.519 "trsvcid": "$NVMF_PORT", 00:21:06.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.519 "hdgst": ${hdgst:-false}, 00:21:06.519 "ddgst": ${ddgst:-false} 00:21:06.519 }, 00:21:06.519 "method": "bdev_nvme_attach_controller" 00:21:06.519 } 00:21:06.519 EOF 00:21:06.519 )") 00:21:06.519 20:16:04 -- nvmf/common.sh@542 -- # cat 00:21:06.519 20:16:04 -- nvmf/common.sh@544 -- # jq . 00:21:06.519 20:16:04 -- nvmf/common.sh@545 -- # IFS=, 00:21:06.519 20:16:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:06.519 "params": { 00:21:06.519 "name": "Nvme0", 00:21:06.519 "trtype": "tcp", 00:21:06.519 "traddr": "10.0.0.2", 00:21:06.519 "adrfam": "ipv4", 00:21:06.519 "trsvcid": "4420", 00:21:06.519 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:06.519 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:06.519 "hdgst": false, 00:21:06.519 "ddgst": false 00:21:06.519 }, 00:21:06.519 "method": "bdev_nvme_attach_controller" 00:21:06.519 }' 00:21:06.519 [2024-04-25 20:16:04.385243] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:06.519 [2024-04-25 20:16:04.385384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550632 ] 00:21:06.780 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.780 [2024-04-25 20:16:04.517428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.780 [2024-04-25 20:16:04.612896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.039 Running I/O for 10 seconds... 
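At this point bdevperf is attached to nqn.2016-06.io.spdk:cnode0 through the JSON config rendered above and is generating verify traffic. Before the host is removed from the subsystem, the harness confirms that I/O is actually flowing by polling bdev_get_iostat over the bdevperf RPC socket until Nvme0n1 reports at least 100 reads (the first poll below already returns read_io_count=900). A minimal sketch of that wait, calling scripts/rpc.py directly rather than the harness's rpc_cmd wrapper and with an assumed retry interval:

  for attempt in {1..10}; do
      reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
              | jq -r '.bdevs[0].num_read_ops')
      [ "$reads" -ge 100 ] && break   # enough traffic observed, safe to remove the host
      sleep 0.25                      # assumed pacing; the real script manages its own retries
  done

Once that check passes, nvmf_subsystem_remove_host is issued against the live connection; the flood of "recv state of tqpair" and "ABORTED - SQ DELETION" records that follows is the target qpair being torn down, bdevperf's outstanding I/O completing as aborted, and the subsequent reconnect failing because the host is no longer allowed on the subsystem.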
00:21:07.299 20:16:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:07.299 20:16:05 -- common/autotest_common.sh@852 -- # return 0 00:21:07.299 20:16:05 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:07.299 20:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.299 20:16:05 -- common/autotest_common.sh@10 -- # set +x 00:21:07.299 20:16:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.299 20:16:05 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:07.299 20:16:05 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:21:07.299 20:16:05 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:07.299 20:16:05 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:21:07.299 20:16:05 -- target/host_management.sh@52 -- # local ret=1 00:21:07.299 20:16:05 -- target/host_management.sh@53 -- # local i 00:21:07.299 20:16:05 -- target/host_management.sh@54 -- # (( i = 10 )) 00:21:07.299 20:16:05 -- target/host_management.sh@54 -- # (( i != 0 )) 00:21:07.299 20:16:05 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:21:07.299 20:16:05 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:21:07.299 20:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.299 20:16:05 -- common/autotest_common.sh@10 -- # set +x 00:21:07.299 20:16:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.299 20:16:05 -- target/host_management.sh@55 -- # read_io_count=900 00:21:07.299 20:16:05 -- target/host_management.sh@58 -- # '[' 900 -ge 100 ']' 00:21:07.299 20:16:05 -- target/host_management.sh@59 -- # ret=0 00:21:07.299 20:16:05 -- target/host_management.sh@60 -- # break 00:21:07.299 20:16:05 -- target/host_management.sh@64 -- # return 0 00:21:07.299 20:16:05 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:21:07.299 20:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.299 20:16:05 -- common/autotest_common.sh@10 -- # set +x 00:21:07.299 [2024-04-25 20:16:05.170043] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.299 [2024-04-25 20:16:05.170102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170127] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170143] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170150] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170157] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170193] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170253] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170260] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170283] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170315] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170338] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170377] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170385] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170415] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170438] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.170446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.300 [2024-04-25 20:16:05.171005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.300 [2024-04-25 20:16:05.171056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.300 [2024-04-25 20:16:05.171084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.300 [2024-04-25 20:16:05.171093] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.300 [2024-04-25 20:16:05.171104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.300 [2024-04-25 20:16:05.171112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.300 [2024-04-25 20:16:05.171123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.300 [2024-04-25 20:16:05.171135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.300 [2024-04-25 20:16:05.171146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.300 [2024-04-25 20:16:05.171154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.300 [2024-04-25 20:16:05.171164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.300 [2024-04-25 20:16:05.171174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.300 [2024-04-25 20:16:05.171187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.300 [2024-04-25 20:16:05.171197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.300 [2024-04-25 20:16:05.171209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.300 [2024-04-25 20:16:05.171218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.300 [2024-04-25 20:16:05.171231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.300 [2024-04-25 20:16:05.171239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.300 [2024-04-25 20:16:05.171251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.300 [2024-04-25 20:16:05.171259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.300 [2024-04-25 20:16:05.171269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.300 [2024-04-25 20:16:05.171279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.300 [2024-04-25 20:16:05.171289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.300 [2024-04-25 20:16:05.171298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.300 [2024-04-25 20:16:05.171307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.300 [2024-04-25 20:16:05.171316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.300 [2024-04-25 20:16:05.171326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.300 [2024-04-25 20:16:05.171334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.300 [2024-04-25 20:16:05.171344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.300 [2024-04-25 20:16:05.171361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.300 [2024-04-25 20:16:05.171371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.300 [2024-04-25 20:16:05.171379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.300 [2024-04-25 20:16:05.171394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.300 [2024-04-25 20:16:05.171403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.300 [2024-04-25 20:16:05.171413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.300 [2024-04-25 20:16:05.171422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.300 [2024-04-25 20:16:05.171435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:07.301 [2024-04-25 20:16:05.171718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 
20:16:05.171922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.171987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.171996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.172003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.172015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.172023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.172034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.172041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.172051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.172060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.172070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.172078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.172088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.172095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.172106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.172114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.172124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.172133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.172144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.172152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.172162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.172170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.172180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.172188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.301 [2024-04-25 20:16:05.172198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.301 [2024-04-25 20:16:05.172206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.302 [2024-04-25 20:16:05.172216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.302 [2024-04-25 20:16:05.172225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.302 [2024-04-25 20:16:05.172236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.302 [2024-04-25 20:16:05.172244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.302 [2024-04-25 20:16:05.172254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.302 [2024-04-25 20:16:05.172262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.302 [2024-04-25 20:16:05.172271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.302 [2024-04-25 20:16:05.172279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.302 [2024-04-25 20:16:05.172289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.302 [2024-04-25 20:16:05.172296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.302 [2024-04-25 20:16:05.172307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.302 [2024-04-25 20:16:05.172315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.302 [2024-04-25 20:16:05.172489] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x613000003d80 was disconnected and freed. reset controller. 00:21:07.302 [2024-04-25 20:16:05.173421] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:07.302 20:16:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.302 task offset: 5376 on job bdev=Nvme0n1 fails 00:21:07.302 00:21:07.302 Latency(us) 00:21:07.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.302 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.302 Job: Nvme0n1 ended in about 0.25 seconds with error 00:21:07.302 Verification LBA range: start 0x0 length 0x400 00:21:07.302 Nvme0n1 : 0.25 4240.63 265.04 259.96 0.00 13918.19 1810.86 20833.55 00:21:07.302 =================================================================================================================== 00:21:07.302 Total : 4240.63 265.04 259.96 0.00 13918.19 1810.86 20833.55 00:21:07.302 20:16:05 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:21:07.302 20:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.302 20:16:05 -- common/autotest_common.sh@10 -- # set +x 00:21:07.302 [2024-04-25 20:16:05.176163] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:07.302 [2024-04-25 20:16:05.176206] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003140 (9): Bad file descriptor 00:21:07.302 [2024-04-25 20:16:05.181365] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:21:07.302 [2024-04-25 20:16:05.181673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:07.302 [2024-04-25 20:16:05.181708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.302 [2024-04-25 20:16:05.181731] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:21:07.302 [2024-04-25 20:16:05.181743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:21:07.302 [2024-04-25 20:16:05.181755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:07.302 [2024-04-25 20:16:05.181764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x613000003140 00:21:07.302 [2024-04-25 20:16:05.181790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x613000003140 (9): Bad file descriptor 00:21:07.302 [2024-04-25 20:16:05.181806] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:07.302 [2024-04-25 20:16:05.181819] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:07.302 [2024-04-25 20:16:05.181832] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:07.302 [2024-04-25 20:16:05.181853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:07.302 20:16:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.302 20:16:05 -- target/host_management.sh@87 -- # sleep 1 00:21:08.680 20:16:06 -- target/host_management.sh@91 -- # kill -9 1550632 00:21:08.680 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1550632) - No such process 00:21:08.680 20:16:06 -- target/host_management.sh@91 -- # true 00:21:08.680 20:16:06 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:21:08.680 20:16:06 -- target/host_management.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:08.680 20:16:06 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:21:08.680 20:16:06 -- nvmf/common.sh@520 -- # config=() 00:21:08.680 20:16:06 -- nvmf/common.sh@520 -- # local subsystem config 00:21:08.680 20:16:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:08.680 20:16:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:08.680 { 00:21:08.680 "params": { 00:21:08.680 "name": "Nvme$subsystem", 00:21:08.680 "trtype": "$TEST_TRANSPORT", 00:21:08.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.680 "adrfam": "ipv4", 00:21:08.680 "trsvcid": "$NVMF_PORT", 00:21:08.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.680 "hdgst": ${hdgst:-false}, 00:21:08.680 "ddgst": ${ddgst:-false} 00:21:08.680 }, 00:21:08.680 "method": "bdev_nvme_attach_controller" 00:21:08.680 } 00:21:08.680 EOF 00:21:08.680 )") 00:21:08.680 20:16:06 -- nvmf/common.sh@542 -- # cat 00:21:08.680 20:16:06 -- nvmf/common.sh@544 -- # jq . 00:21:08.680 20:16:06 -- nvmf/common.sh@545 -- # IFS=, 00:21:08.680 20:16:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:08.680 "params": { 00:21:08.680 "name": "Nvme0", 00:21:08.680 "trtype": "tcp", 00:21:08.680 "traddr": "10.0.0.2", 00:21:08.680 "adrfam": "ipv4", 00:21:08.680 "trsvcid": "4420", 00:21:08.680 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:08.680 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:08.680 "hdgst": false, 00:21:08.680 "ddgst": false 00:21:08.680 }, 00:21:08.680 "method": "bdev_nvme_attach_controller" 00:21:08.680 }' 00:21:08.680 [2024-04-25 20:16:06.271534] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:21:08.680 [2024-04-25 20:16:06.271675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550953 ] 00:21:08.680 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.680 [2024-04-25 20:16:06.402269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.680 [2024-04-25 20:16:06.498328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.248 Running I/O for 1 seconds... 00:21:10.184 00:21:10.184 Latency(us) 00:21:10.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.184 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.184 Verification LBA range: start 0x0 length 0x400 00:21:10.184 Nvme0n1 : 1.01 4735.54 295.97 0.00 0.00 13329.23 1776.37 21799.34 00:21:10.184 =================================================================================================================== 00:21:10.184 Total : 4735.54 295.97 0.00 0.00 13329.23 1776.37 21799.34 00:21:10.443 20:16:08 -- target/host_management.sh@101 -- # stoptarget 00:21:10.443 20:16:08 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:21:10.443 20:16:08 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:10.443 20:16:08 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:10.443 20:16:08 -- target/host_management.sh@40 -- # nvmftestfini 00:21:10.443 20:16:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:10.443 20:16:08 -- nvmf/common.sh@116 -- # sync 00:21:10.443 20:16:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:10.443 20:16:08 -- nvmf/common.sh@119 -- # set +e 00:21:10.443 20:16:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:10.443 20:16:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:10.443 rmmod nvme_tcp 00:21:10.443 rmmod nvme_fabrics 00:21:10.443 rmmod nvme_keyring 00:21:10.701 20:16:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:10.701 20:16:08 -- nvmf/common.sh@123 -- # set -e 00:21:10.701 20:16:08 -- nvmf/common.sh@124 -- # return 0 00:21:10.701 20:16:08 -- nvmf/common.sh@477 -- # '[' -n 1550303 ']' 00:21:10.701 20:16:08 -- nvmf/common.sh@478 -- # killprocess 1550303 00:21:10.701 20:16:08 -- common/autotest_common.sh@926 -- # '[' -z 1550303 ']' 00:21:10.701 20:16:08 -- common/autotest_common.sh@930 -- # kill -0 1550303 00:21:10.701 20:16:08 -- common/autotest_common.sh@931 -- # uname 00:21:10.701 20:16:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:10.701 20:16:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1550303 00:21:10.701 20:16:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:10.701 20:16:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:10.701 20:16:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1550303' 00:21:10.701 killing process with pid 1550303 00:21:10.701 20:16:08 -- common/autotest_common.sh@945 -- # kill 1550303 00:21:10.701 20:16:08 -- common/autotest_common.sh@950 -- # wait 1550303 00:21:11.269 [2024-04-25 20:16:08.904553] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:21:11.269 20:16:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:11.269 20:16:08 -- nvmf/common.sh@483 -- # [[ 
tcp == \t\c\p ]] 00:21:11.269 20:16:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:11.269 20:16:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.269 20:16:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:11.269 20:16:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.269 20:16:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.269 20:16:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.178 20:16:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:13.178 00:21:13.178 real 0m7.673s 00:21:13.178 user 0m23.522s 00:21:13.178 sys 0m1.367s 00:21:13.178 20:16:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:13.178 20:16:11 -- common/autotest_common.sh@10 -- # set +x 00:21:13.178 ************************************ 00:21:13.178 END TEST nvmf_host_management 00:21:13.178 ************************************ 00:21:13.178 20:16:11 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:13.178 00:21:13.178 real 0m13.538s 00:21:13.178 user 0m25.057s 00:21:13.178 sys 0m5.614s 00:21:13.178 20:16:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:13.178 20:16:11 -- common/autotest_common.sh@10 -- # set +x 00:21:13.178 ************************************ 00:21:13.178 END TEST nvmf_host_management 00:21:13.178 ************************************ 00:21:13.178 20:16:11 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:21:13.178 20:16:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:13.178 20:16:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:13.178 20:16:11 -- common/autotest_common.sh@10 -- # set +x 00:21:13.178 ************************************ 00:21:13.178 START TEST nvmf_lvol 00:21:13.178 ************************************ 00:21:13.178 20:16:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:21:13.437 * Looking for test storage... 
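For reference, and not part of the captured output: the host_management run above drives bdevperf with a JSON config produced by gen_nvmf_target_json, whose rendered object is printed in the trace. A hand-written equivalent is sketched below; the connection parameters are copied from the trace, while the outer "subsystems"/"bdev" wrapper is the standard SPDK JSON config layout and is assumed here, since only the inner object is echoed by the script.

# Minimal sketch, assuming the standard SPDK --json layout; values from the trace above.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 64-deep, 64 KiB verify workload for 1 second, matching the bdevperf flags in the trace.
./build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1
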
00:21:13.437 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:21:13.437 20:16:11 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.437 20:16:11 -- nvmf/common.sh@7 -- # uname -s 00:21:13.437 20:16:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.437 20:16:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.437 20:16:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.437 20:16:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.437 20:16:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.437 20:16:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.437 20:16:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.437 20:16:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.437 20:16:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.437 20:16:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.437 20:16:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:13.437 20:16:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:13.437 20:16:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.437 20:16:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.437 20:16:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:13.437 20:16:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:13.437 20:16:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.437 20:16:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.437 20:16:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.437 20:16:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.437 20:16:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.437 20:16:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.437 20:16:11 -- paths/export.sh@5 -- # export PATH 00:21:13.437 20:16:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.437 20:16:11 -- nvmf/common.sh@46 -- # : 0 00:21:13.437 20:16:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:13.437 20:16:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:13.437 20:16:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:13.437 20:16:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.437 20:16:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.437 20:16:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:13.437 20:16:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:13.437 20:16:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:13.437 20:16:11 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:13.437 20:16:11 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:13.437 20:16:11 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:21:13.437 20:16:11 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:21:13.437 20:16:11 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:13.437 20:16:11 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:21:13.437 20:16:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:13.437 20:16:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.437 20:16:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:13.437 20:16:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:13.437 20:16:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:13.437 20:16:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.437 20:16:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.437 20:16:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.437 20:16:11 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:21:13.437 20:16:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:13.437 20:16:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:13.437 20:16:11 -- common/autotest_common.sh@10 -- # set +x 00:21:18.726 20:16:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:18.726 20:16:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:18.726 20:16:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:18.726 20:16:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:18.726 20:16:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:18.726 
20:16:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:18.726 20:16:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:18.726 20:16:16 -- nvmf/common.sh@294 -- # net_devs=() 00:21:18.726 20:16:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:18.726 20:16:16 -- nvmf/common.sh@295 -- # e810=() 00:21:18.726 20:16:16 -- nvmf/common.sh@295 -- # local -ga e810 00:21:18.726 20:16:16 -- nvmf/common.sh@296 -- # x722=() 00:21:18.726 20:16:16 -- nvmf/common.sh@296 -- # local -ga x722 00:21:18.726 20:16:16 -- nvmf/common.sh@297 -- # mlx=() 00:21:18.726 20:16:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:18.726 20:16:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.726 20:16:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.726 20:16:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.726 20:16:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.726 20:16:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.726 20:16:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.726 20:16:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.726 20:16:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.726 20:16:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.726 20:16:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.726 20:16:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.726 20:16:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:18.726 20:16:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:18.726 20:16:16 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:21:18.726 20:16:16 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:21:18.726 20:16:16 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:21:18.726 20:16:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:18.726 20:16:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:18.726 20:16:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:18.726 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:18.726 20:16:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:18.726 20:16:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:18.726 20:16:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.726 20:16:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.726 20:16:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:18.726 20:16:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:18.726 20:16:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:18.726 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:18.726 20:16:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:18.726 20:16:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:18.726 20:16:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.726 20:16:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.726 20:16:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:18.726 20:16:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:18.726 20:16:16 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:21:18.726 20:16:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:18.726 20:16:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.726 20:16:16 -- nvmf/common.sh@383 -- # (( 1 == 
0 )) 00:21:18.726 20:16:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.726 20:16:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:18.726 Found net devices under 0000:27:00.0: cvl_0_0 00:21:18.726 20:16:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.726 20:16:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:18.726 20:16:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.726 20:16:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:18.727 20:16:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.727 20:16:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:18.727 Found net devices under 0000:27:00.1: cvl_0_1 00:21:18.727 20:16:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.727 20:16:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:18.727 20:16:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:18.727 20:16:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:18.727 20:16:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:18.727 20:16:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:18.727 20:16:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.727 20:16:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.727 20:16:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.727 20:16:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:18.727 20:16:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.727 20:16:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.727 20:16:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:18.727 20:16:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.727 20:16:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.727 20:16:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:18.727 20:16:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:18.727 20:16:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.727 20:16:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.727 20:16:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.727 20:16:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.727 20:16:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:18.727 20:16:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.727 20:16:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.727 20:16:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.727 20:16:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:18.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:21:18.727 00:21:18.727 --- 10.0.0.2 ping statistics --- 00:21:18.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.727 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:21:18.727 20:16:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:18.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:21:18.727 00:21:18.727 --- 10.0.0.1 ping statistics --- 00:21:18.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.727 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:21:18.727 20:16:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.727 20:16:16 -- nvmf/common.sh@410 -- # return 0 00:21:18.727 20:16:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:18.727 20:16:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.727 20:16:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:18.727 20:16:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:18.727 20:16:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.727 20:16:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:18.727 20:16:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:18.727 20:16:16 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:21:18.727 20:16:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:18.727 20:16:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:18.727 20:16:16 -- common/autotest_common.sh@10 -- # set +x 00:21:18.727 20:16:16 -- nvmf/common.sh@469 -- # nvmfpid=1555454 00:21:18.727 20:16:16 -- nvmf/common.sh@470 -- # waitforlisten 1555454 00:21:18.727 20:16:16 -- common/autotest_common.sh@819 -- # '[' -z 1555454 ']' 00:21:18.727 20:16:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.727 20:16:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:18.727 20:16:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.727 20:16:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:18.727 20:16:16 -- common/autotest_common.sh@10 -- # set +x 00:21:18.727 20:16:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:21:18.727 [2024-04-25 20:16:16.391746] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:18.727 [2024-04-25 20:16:16.391847] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.727 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.727 [2024-04-25 20:16:16.511020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:18.727 [2024-04-25 20:16:16.603860] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:18.727 [2024-04-25 20:16:16.604019] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.727 [2024-04-25 20:16:16.604031] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.727 [2024-04-25 20:16:16.604041] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
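Not part of the log: the nvmf_tcp_init sequence traced above builds the test topology by moving one NIC port into a network namespace and leaving its peer on the host. Condensed by hand it amounts to the sketch below; the interface names cvl_0_0/cvl_0_1 are specific to this rig, so substitute your own ports.

# Sketch of the TCP test topology, commands taken from the trace above.
TGT_IF=cvl_0_0; INIT_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INIT_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                  # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INIT_IF"             # initiator side stays on the host
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INIT_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target reachability
ip netns exec "$NS" ping -c 1 10.0.0.1             # target -> initiator reachability
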
00:21:18.727 [2024-04-25 20:16:16.604117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.727 [2024-04-25 20:16:16.604214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.727 [2024-04-25 20:16:16.604219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.295 20:16:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:19.295 20:16:17 -- common/autotest_common.sh@852 -- # return 0 00:21:19.295 20:16:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:19.295 20:16:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:19.295 20:16:17 -- common/autotest_common.sh@10 -- # set +x 00:21:19.295 20:16:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.295 20:16:17 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:19.556 [2024-04-25 20:16:17.256557] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.556 20:16:17 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:19.556 20:16:17 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:21:19.556 20:16:17 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:19.816 20:16:17 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:21:19.816 20:16:17 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:21:20.074 20:16:17 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:21:20.074 20:16:17 -- target/nvmf_lvol.sh@29 -- # lvs=9381186c-2218-41c5-aa25-3b9c49792c9f 00:21:20.074 20:16:17 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9381186c-2218-41c5-aa25-3b9c49792c9f lvol 20 00:21:20.332 20:16:18 -- target/nvmf_lvol.sh@32 -- # lvol=34d9f71c-6012-40b6-a9c3-71c530ffd323 00:21:20.332 20:16:18 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:20.332 20:16:18 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 34d9f71c-6012-40b6-a9c3-71c530ffd323 00:21:20.590 20:16:18 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:20.590 [2024-04-25 20:16:18.491570] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.590 20:16:18 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:20.850 20:16:18 -- target/nvmf_lvol.sh@42 -- # perf_pid=1555795 00:21:20.850 20:16:18 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:21:20.850 20:16:18 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:21:20.850 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.788 20:16:19 -- target/nvmf_lvol.sh@47 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 34d9f71c-6012-40b6-a9c3-71c530ffd323 MY_SNAPSHOT 00:21:22.048 20:16:19 -- target/nvmf_lvol.sh@47 -- # snapshot=e089ca57-0103-437f-a170-49a5ce89d5ce 00:21:22.048 20:16:19 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 34d9f71c-6012-40b6-a9c3-71c530ffd323 30 00:21:22.306 20:16:19 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e089ca57-0103-437f-a170-49a5ce89d5ce MY_CLONE 00:21:22.306 20:16:20 -- target/nvmf_lvol.sh@49 -- # clone=9119bbad-2540-46a2-aa93-f6f8bee41060 00:21:22.306 20:16:20 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9119bbad-2540-46a2-aa93-f6f8bee41060 00:21:22.874 20:16:20 -- target/nvmf_lvol.sh@53 -- # wait 1555795 00:21:32.929 Initializing NVMe Controllers 00:21:32.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:21:32.929 Controller IO queue size 128, less than required. 00:21:32.930 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:32.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:21:32.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:21:32.930 Initialization complete. Launching workers. 00:21:32.930 ======================================================== 00:21:32.930 Latency(us) 00:21:32.930 Device Information : IOPS MiB/s Average min max 00:21:32.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 13701.80 53.52 9344.42 1479.70 83081.70 00:21:32.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 13631.60 53.25 9391.09 3473.25 62891.25 00:21:32.930 ======================================================== 00:21:32.930 Total : 27333.40 106.77 9367.70 1479.70 83081.70 00:21:32.930 00:21:32.930 20:16:29 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:32.930 20:16:29 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 34d9f71c-6012-40b6-a9c3-71c530ffd323 00:21:32.930 20:16:29 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9381186c-2218-41c5-aa25-3b9c49792c9f 00:21:32.930 20:16:29 -- target/nvmf_lvol.sh@60 -- # rm -f 00:21:32.930 20:16:29 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:21:32.930 20:16:29 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:21:32.930 20:16:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:32.930 20:16:29 -- nvmf/common.sh@116 -- # sync 00:21:32.930 20:16:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:32.930 20:16:29 -- nvmf/common.sh@119 -- # set +e 00:21:32.930 20:16:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:32.930 20:16:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:32.930 rmmod nvme_tcp 00:21:32.930 rmmod nvme_fabrics 00:21:32.930 rmmod nvme_keyring 00:21:32.930 20:16:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:32.930 20:16:29 -- nvmf/common.sh@123 -- # set -e 00:21:32.930 20:16:29 -- nvmf/common.sh@124 -- # return 0 00:21:32.930 20:16:29 -- nvmf/common.sh@477 -- # '[' -n 1555454 ']' 00:21:32.930 20:16:29 -- nvmf/common.sh@478 -- # killprocess 1555454 
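Not part of the log: the nvmf_lvol test above issues its RPCs one trace line at a time; collected into a single hand-run sequence it looks roughly like the sketch below. UUIDs are captured from the RPC output instead of using the literal ones in the log, and the listener address comes from this rig's namespace setup.

# Sketch of the lvol snapshot/clone workflow exercised by nvmf_lvol.sh above.
RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512             # -> Malloc0
$RPC bdev_malloc_create 64 512             # -> Malloc1
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # point-in-time snapshot
$RPC bdev_lvol_resize "$lvol" 30                      # grow the live lvol to 30 MiB
clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)        # thin clone of the snapshot
$RPC bdev_lvol_inflate "$clone"                       # decouple the clone from its snapshot
# teardown, mirroring the end of the test
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_lvol_delete "$lvol"
$RPC bdev_lvol_delete_lvstore -u "$lvs"
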
00:21:32.930 20:16:29 -- common/autotest_common.sh@926 -- # '[' -z 1555454 ']' 00:21:32.930 20:16:29 -- common/autotest_common.sh@930 -- # kill -0 1555454 00:21:32.930 20:16:29 -- common/autotest_common.sh@931 -- # uname 00:21:32.930 20:16:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:32.930 20:16:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1555454 00:21:32.930 20:16:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:32.930 20:16:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:32.930 20:16:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1555454' 00:21:32.930 killing process with pid 1555454 00:21:32.930 20:16:29 -- common/autotest_common.sh@945 -- # kill 1555454 00:21:32.930 20:16:29 -- common/autotest_common.sh@950 -- # wait 1555454 00:21:32.930 20:16:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:32.930 20:16:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:32.930 20:16:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:32.930 20:16:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:32.930 20:16:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:32.930 20:16:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.930 20:16:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:32.930 20:16:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.308 20:16:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:34.308 00:21:34.308 real 0m21.130s 00:21:34.308 user 1m2.507s 00:21:34.308 sys 0m6.029s 00:21:34.308 20:16:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:34.308 20:16:32 -- common/autotest_common.sh@10 -- # set +x 00:21:34.308 ************************************ 00:21:34.308 END TEST nvmf_lvol 00:21:34.308 ************************************ 00:21:34.569 20:16:32 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:21:34.569 20:16:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:34.569 20:16:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:34.569 20:16:32 -- common/autotest_common.sh@10 -- # set +x 00:21:34.569 ************************************ 00:21:34.569 START TEST nvmf_lvs_grow 00:21:34.569 ************************************ 00:21:34.569 20:16:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:21:34.569 * Looking for test storage... 
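Not part of the log: the nvmftestfini teardown traced above boils down to the few commands below. $nvmfpid is the nvmf_tgt PID (1555454 in this run); the namespace removal is hidden behind _remove_spdk_ns in the script, so that exact command is an assumption.

# Sketch of the TCP-target teardown performed by nvmftestfini above.
modprobe -v -r nvme-tcp              # unloads nvme_tcp (rmmod nvme_tcp/nvme_fabrics/nvme_keyring in the trace)
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                      # killprocess: the script then waits for the target to exit
ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1
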
00:21:34.569 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:21:34.569 20:16:32 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.569 20:16:32 -- nvmf/common.sh@7 -- # uname -s 00:21:34.569 20:16:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.569 20:16:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.569 20:16:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.569 20:16:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.569 20:16:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.569 20:16:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.569 20:16:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.569 20:16:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.569 20:16:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.569 20:16:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.569 20:16:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:34.569 20:16:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:34.569 20:16:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.569 20:16:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.569 20:16:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:34.569 20:16:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:34.569 20:16:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.569 20:16:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.569 20:16:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.569 20:16:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.569 20:16:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.569 20:16:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.569 20:16:32 -- paths/export.sh@5 -- # export PATH 00:21:34.569 20:16:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.569 20:16:32 -- nvmf/common.sh@46 -- # : 0 00:21:34.569 20:16:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:34.569 20:16:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:34.569 20:16:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:34.569 20:16:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.569 20:16:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.569 20:16:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:34.569 20:16:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:34.569 20:16:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:34.569 20:16:32 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:34.569 20:16:32 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:34.569 20:16:32 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:21:34.569 20:16:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:34.569 20:16:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.569 20:16:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:34.569 20:16:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:34.569 20:16:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:34.569 20:16:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.569 20:16:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:34.569 20:16:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.569 20:16:32 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:21:34.569 20:16:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:34.569 20:16:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:34.569 20:16:32 -- common/autotest_common.sh@10 -- # set +x 00:21:39.862 20:16:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:39.862 20:16:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:39.862 20:16:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:39.862 20:16:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:39.862 20:16:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:39.862 20:16:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:39.862 20:16:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:39.862 20:16:37 -- nvmf/common.sh@294 -- # net_devs=() 00:21:39.862 
20:16:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:39.862 20:16:37 -- nvmf/common.sh@295 -- # e810=() 00:21:39.862 20:16:37 -- nvmf/common.sh@295 -- # local -ga e810 00:21:39.862 20:16:37 -- nvmf/common.sh@296 -- # x722=() 00:21:39.862 20:16:37 -- nvmf/common.sh@296 -- # local -ga x722 00:21:39.862 20:16:37 -- nvmf/common.sh@297 -- # mlx=() 00:21:39.862 20:16:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:39.862 20:16:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.862 20:16:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.862 20:16:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.862 20:16:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.862 20:16:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.862 20:16:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.862 20:16:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.862 20:16:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.862 20:16:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.862 20:16:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.862 20:16:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.862 20:16:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:39.862 20:16:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:39.862 20:16:37 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:21:39.862 20:16:37 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:21:39.862 20:16:37 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:21:39.862 20:16:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:39.862 20:16:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:39.862 20:16:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:39.862 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:39.862 20:16:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:39.862 20:16:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:39.862 20:16:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.862 20:16:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.862 20:16:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:39.862 20:16:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:39.862 20:16:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:39.862 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:39.862 20:16:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:39.862 20:16:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:39.862 20:16:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.862 20:16:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.862 20:16:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:39.862 20:16:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:39.862 20:16:37 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:21:39.862 20:16:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:39.862 20:16:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.862 20:16:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:39.862 20:16:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.862 20:16:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 
00:21:39.862 Found net devices under 0000:27:00.0: cvl_0_0 00:21:39.862 20:16:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.862 20:16:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:39.862 20:16:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.862 20:16:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:39.862 20:16:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.862 20:16:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:39.862 Found net devices under 0000:27:00.1: cvl_0_1 00:21:39.862 20:16:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.862 20:16:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:39.862 20:16:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:39.862 20:16:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:39.862 20:16:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:39.862 20:16:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:39.862 20:16:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.862 20:16:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.862 20:16:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.862 20:16:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:39.862 20:16:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.862 20:16:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.862 20:16:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:39.862 20:16:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.862 20:16:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.862 20:16:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:39.862 20:16:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:39.862 20:16:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.862 20:16:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.862 20:16:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.862 20:16:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.862 20:16:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:39.862 20:16:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:39.863 20:16:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:39.863 20:16:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:39.863 20:16:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:39.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:39.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:21:39.863 00:21:39.863 --- 10.0.0.2 ping statistics --- 00:21:39.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.863 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:21:39.863 20:16:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:39.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:39.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:21:39.863 00:21:39.863 --- 10.0.0.1 ping statistics --- 00:21:39.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.863 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:21:39.863 20:16:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.863 20:16:37 -- nvmf/common.sh@410 -- # return 0 00:21:39.863 20:16:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:39.863 20:16:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.863 20:16:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:39.863 20:16:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:39.863 20:16:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.863 20:16:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:39.863 20:16:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:39.863 20:16:37 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:21:39.863 20:16:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:39.863 20:16:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:39.863 20:16:37 -- common/autotest_common.sh@10 -- # set +x 00:21:39.863 20:16:37 -- nvmf/common.sh@469 -- # nvmfpid=1561885 00:21:39.863 20:16:37 -- nvmf/common.sh@470 -- # waitforlisten 1561885 00:21:39.863 20:16:37 -- common/autotest_common.sh@819 -- # '[' -z 1561885 ']' 00:21:39.863 20:16:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.863 20:16:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:39.863 20:16:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.863 20:16:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:39.863 20:16:37 -- common/autotest_common.sh@10 -- # set +x 00:21:39.863 20:16:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:39.863 [2024-04-25 20:16:37.638882] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:39.863 [2024-04-25 20:16:37.639006] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.863 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.863 [2024-04-25 20:16:37.766424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.124 [2024-04-25 20:16:37.863970] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:40.124 [2024-04-25 20:16:37.864141] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.124 [2024-04-25 20:16:37.864154] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.124 [2024-04-25 20:16:37.864164] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:40.124 [2024-04-25 20:16:37.864188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.691 20:16:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:40.691 20:16:38 -- common/autotest_common.sh@852 -- # return 0 00:21:40.691 20:16:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:40.691 20:16:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:40.691 20:16:38 -- common/autotest_common.sh@10 -- # set +x 00:21:40.691 20:16:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.691 20:16:38 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:40.691 [2024-04-25 20:16:38.465775] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.691 20:16:38 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:21:40.691 20:16:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:40.691 20:16:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:40.691 20:16:38 -- common/autotest_common.sh@10 -- # set +x 00:21:40.691 ************************************ 00:21:40.691 START TEST lvs_grow_clean 00:21:40.691 ************************************ 00:21:40.691 20:16:38 -- common/autotest_common.sh@1104 -- # lvs_grow 00:21:40.691 20:16:38 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:21:40.691 20:16:38 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:21:40.691 20:16:38 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:21:40.691 20:16:38 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:21:40.691 20:16:38 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:21:40.691 20:16:38 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:21:40.691 20:16:38 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:40.691 20:16:38 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:40.691 20:16:38 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:40.950 20:16:38 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:21:40.950 20:16:38 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:21:40.950 20:16:38 -- target/nvmf_lvs_grow.sh@28 -- # lvs=97045d38-0938-414b-9a4e-108dc6551ad9 00:21:40.950 20:16:38 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:21:40.950 20:16:38 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97045d38-0938-414b-9a4e-108dc6551ad9 00:21:41.210 20:16:38 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:21:41.210 20:16:38 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:21:41.210 20:16:38 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 97045d38-0938-414b-9a4e-108dc6551ad9 lvol 150 00:21:41.210 20:16:39 -- target/nvmf_lvs_grow.sh@33 -- # lvol=e9833752-403c-44e5-b328-d0f31dc0c2e9 00:21:41.210 20:16:39 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:41.210 20:16:39 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:21:41.468 [2024-04-25 20:16:39.171365] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:21:41.468 [2024-04-25 20:16:39.171442] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:21:41.468 true 00:21:41.468 20:16:39 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97045d38-0938-414b-9a4e-108dc6551ad9 00:21:41.468 20:16:39 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:21:41.468 20:16:39 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:21:41.468 20:16:39 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:41.729 20:16:39 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e9833752-403c-44e5-b328-d0f31dc0c2e9 00:21:41.729 20:16:39 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:41.988 [2024-04-25 20:16:39.755773] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.988 20:16:39 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:41.988 20:16:39 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1562419 00:21:41.988 20:16:39 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:41.988 20:16:39 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1562419 /var/tmp/bdevperf.sock 00:21:41.988 20:16:39 -- common/autotest_common.sh@819 -- # '[' -z 1562419 ']' 00:21:41.988 20:16:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.988 20:16:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:41.988 20:16:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:41.988 20:16:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:41.988 20:16:39 -- common/autotest_common.sh@10 -- # set +x 00:21:41.988 20:16:39 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:21:42.245 [2024-04-25 20:16:39.978037] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:21:42.245 [2024-04-25 20:16:39.978156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1562419 ] 00:21:42.245 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.245 [2024-04-25 20:16:40.092951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.503 [2024-04-25 20:16:40.182101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.761 20:16:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:42.761 20:16:40 -- common/autotest_common.sh@852 -- # return 0 00:21:42.761 20:16:40 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:21:43.332 Nvme0n1 00:21:43.332 20:16:40 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:21:43.332 [ 00:21:43.332 { 00:21:43.332 "name": "Nvme0n1", 00:21:43.332 "aliases": [ 00:21:43.332 "e9833752-403c-44e5-b328-d0f31dc0c2e9" 00:21:43.332 ], 00:21:43.332 "product_name": "NVMe disk", 00:21:43.332 "block_size": 4096, 00:21:43.332 "num_blocks": 38912, 00:21:43.332 "uuid": "e9833752-403c-44e5-b328-d0f31dc0c2e9", 00:21:43.332 "assigned_rate_limits": { 00:21:43.332 "rw_ios_per_sec": 0, 00:21:43.332 "rw_mbytes_per_sec": 0, 00:21:43.332 "r_mbytes_per_sec": 0, 00:21:43.332 "w_mbytes_per_sec": 0 00:21:43.332 }, 00:21:43.332 "claimed": false, 00:21:43.332 "zoned": false, 00:21:43.332 "supported_io_types": { 00:21:43.332 "read": true, 00:21:43.332 "write": true, 00:21:43.332 "unmap": true, 00:21:43.332 "write_zeroes": true, 00:21:43.332 "flush": true, 00:21:43.332 "reset": true, 00:21:43.332 "compare": true, 00:21:43.332 "compare_and_write": true, 00:21:43.332 "abort": true, 00:21:43.332 "nvme_admin": true, 00:21:43.332 "nvme_io": true 00:21:43.332 }, 00:21:43.332 "driver_specific": { 00:21:43.332 "nvme": [ 00:21:43.332 { 00:21:43.332 "trid": { 00:21:43.332 "trtype": "TCP", 00:21:43.332 "adrfam": "IPv4", 00:21:43.332 "traddr": "10.0.0.2", 00:21:43.332 "trsvcid": "4420", 00:21:43.332 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:43.332 }, 00:21:43.332 "ctrlr_data": { 00:21:43.332 "cntlid": 1, 00:21:43.332 "vendor_id": "0x8086", 00:21:43.332 "model_number": "SPDK bdev Controller", 00:21:43.332 "serial_number": "SPDK0", 00:21:43.332 "firmware_revision": "24.01.1", 00:21:43.332 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:43.332 "oacs": { 00:21:43.332 "security": 0, 00:21:43.332 "format": 0, 00:21:43.332 "firmware": 0, 00:21:43.332 "ns_manage": 0 00:21:43.332 }, 00:21:43.332 "multi_ctrlr": true, 00:21:43.332 "ana_reporting": false 00:21:43.332 }, 00:21:43.332 "vs": { 00:21:43.332 "nvme_version": "1.3" 00:21:43.332 }, 00:21:43.332 "ns_data": { 00:21:43.332 "id": 1, 00:21:43.332 "can_share": true 00:21:43.332 } 00:21:43.332 } 00:21:43.332 ], 00:21:43.332 "mp_policy": "active_passive" 00:21:43.332 } 00:21:43.332 } 00:21:43.332 ] 00:21:43.332 20:16:41 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1562720 00:21:43.332 20:16:41 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:43.332 20:16:41 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:21:43.332 Running I/O for 10 
seconds... 00:21:44.708 Latency(us) 00:21:44.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:44.708 Nvme0n1 : 1.00 23906.00 93.38 0.00 0.00 0.00 0.00 0.00 00:21:44.708 =================================================================================================================== 00:21:44.708 Total : 23906.00 93.38 0.00 0.00 0.00 0.00 0.00 00:21:44.708 00:21:45.280 20:16:43 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 97045d38-0938-414b-9a4e-108dc6551ad9 00:21:45.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:45.280 Nvme0n1 : 2.00 24092.00 94.11 0.00 0.00 0.00 0.00 0.00 00:21:45.280 =================================================================================================================== 00:21:45.280 Total : 24092.00 94.11 0.00 0.00 0.00 0.00 0.00 00:21:45.280 00:21:45.546 true 00:21:45.546 20:16:43 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97045d38-0938-414b-9a4e-108dc6551ad9 00:21:45.546 20:16:43 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:21:45.546 20:16:43 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:21:45.546 20:16:43 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:21:45.546 20:16:43 -- target/nvmf_lvs_grow.sh@65 -- # wait 1562720 00:21:46.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:46.481 Nvme0n1 : 3.00 24142.33 94.31 0.00 0.00 0.00 0.00 0.00 00:21:46.481 =================================================================================================================== 00:21:46.481 Total : 24142.33 94.31 0.00 0.00 0.00 0.00 0.00 00:21:46.481 00:21:47.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:47.421 Nvme0n1 : 4.00 24218.50 94.60 0.00 0.00 0.00 0.00 0.00 00:21:47.421 =================================================================================================================== 00:21:47.421 Total : 24218.50 94.60 0.00 0.00 0.00 0.00 0.00 00:21:47.421 00:21:48.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:48.356 Nvme0n1 : 5.00 24235.40 94.67 0.00 0.00 0.00 0.00 0.00 00:21:48.356 =================================================================================================================== 00:21:48.356 Total : 24235.40 94.67 0.00 0.00 0.00 0.00 0.00 00:21:48.356 00:21:49.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:49.297 Nvme0n1 : 6.00 24243.83 94.70 0.00 0.00 0.00 0.00 0.00 00:21:49.297 =================================================================================================================== 00:21:49.297 Total : 24243.83 94.70 0.00 0.00 0.00 0.00 0.00 00:21:49.297 00:21:50.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:50.677 Nvme0n1 : 7.00 24288.29 94.88 0.00 0.00 0.00 0.00 0.00 00:21:50.677 =================================================================================================================== 00:21:50.677 Total : 24288.29 94.88 0.00 0.00 0.00 0.00 0.00 00:21:50.677 00:21:51.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:51.620 Nvme0n1 : 8.00 24273.25 94.82 0.00 0.00 0.00 0.00 0.00 00:21:51.620 
=================================================================================================================== 00:21:51.620 Total : 24273.25 94.82 0.00 0.00 0.00 0.00 0.00 00:21:51.620 00:21:52.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:52.627 Nvme0n1 : 9.00 24200.11 94.53 0.00 0.00 0.00 0.00 0.00 00:21:52.627 =================================================================================================================== 00:21:52.627 Total : 24200.11 94.53 0.00 0.00 0.00 0.00 0.00 00:21:52.627 00:21:53.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:53.569 Nvme0n1 : 10.00 24186.00 94.48 0.00 0.00 0.00 0.00 0.00 00:21:53.569 =================================================================================================================== 00:21:53.569 Total : 24186.00 94.48 0.00 0.00 0.00 0.00 0.00 00:21:53.569 00:21:53.569 00:21:53.569 Latency(us) 00:21:53.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:53.569 Nvme0n1 : 10.00 24184.33 94.47 0.00 0.00 5289.98 3156.08 12831.26 00:21:53.569 =================================================================================================================== 00:21:53.569 Total : 24184.33 94.47 0.00 0.00 5289.98 3156.08 12831.26 00:21:53.569 0 00:21:53.569 20:16:51 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1562419 00:21:53.569 20:16:51 -- common/autotest_common.sh@926 -- # '[' -z 1562419 ']' 00:21:53.569 20:16:51 -- common/autotest_common.sh@930 -- # kill -0 1562419 00:21:53.569 20:16:51 -- common/autotest_common.sh@931 -- # uname 00:21:53.569 20:16:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:53.569 20:16:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1562419 00:21:53.569 20:16:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:53.570 20:16:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:53.570 20:16:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1562419' 00:21:53.570 killing process with pid 1562419 00:21:53.570 20:16:51 -- common/autotest_common.sh@945 -- # kill 1562419 00:21:53.570 Received shutdown signal, test time was about 10.000000 seconds 00:21:53.570 00:21:53.570 Latency(us) 00:21:53.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.570 =================================================================================================================== 00:21:53.570 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:53.570 20:16:51 -- common/autotest_common.sh@950 -- # wait 1562419 00:21:53.827 20:16:51 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:54.084 20:16:51 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97045d38-0938-414b-9a4e-108dc6551ad9 00:21:54.084 20:16:51 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:21:54.084 20:16:51 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:21:54.084 20:16:51 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:21:54.084 20:16:51 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:54.341 [2024-04-25 20:16:52.048079] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: 
*NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:21:54.341 20:16:52 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97045d38-0938-414b-9a4e-108dc6551ad9 00:21:54.341 20:16:52 -- common/autotest_common.sh@640 -- # local es=0 00:21:54.341 20:16:52 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97045d38-0938-414b-9a4e-108dc6551ad9 00:21:54.341 20:16:52 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:54.341 20:16:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:54.341 20:16:52 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:54.341 20:16:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:54.341 20:16:52 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:54.341 20:16:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:54.341 20:16:52 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:54.341 20:16:52 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:21:54.341 20:16:52 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97045d38-0938-414b-9a4e-108dc6551ad9 00:21:54.341 request: 00:21:54.341 { 00:21:54.341 "uuid": "97045d38-0938-414b-9a4e-108dc6551ad9", 00:21:54.341 "method": "bdev_lvol_get_lvstores", 00:21:54.341 "req_id": 1 00:21:54.341 } 00:21:54.341 Got JSON-RPC error response 00:21:54.341 response: 00:21:54.341 { 00:21:54.341 "code": -19, 00:21:54.341 "message": "No such device" 00:21:54.341 } 00:21:54.341 20:16:52 -- common/autotest_common.sh@643 -- # es=1 00:21:54.341 20:16:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:54.341 20:16:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:54.341 20:16:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:54.341 20:16:52 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:54.600 aio_bdev 00:21:54.600 20:16:52 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev e9833752-403c-44e5-b328-d0f31dc0c2e9 00:21:54.600 20:16:52 -- common/autotest_common.sh@887 -- # local bdev_name=e9833752-403c-44e5-b328-d0f31dc0c2e9 00:21:54.600 20:16:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:54.600 20:16:52 -- common/autotest_common.sh@889 -- # local i 00:21:54.600 20:16:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:54.600 20:16:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:54.600 20:16:52 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:54.600 20:16:52 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e9833752-403c-44e5-b328-d0f31dc0c2e9 -t 2000 00:21:54.860 [ 00:21:54.860 { 00:21:54.860 "name": "e9833752-403c-44e5-b328-d0f31dc0c2e9", 00:21:54.860 "aliases": [ 00:21:54.860 "lvs/lvol" 00:21:54.860 ], 00:21:54.860 "product_name": "Logical Volume", 00:21:54.860 "block_size": 4096, 
00:21:54.860 "num_blocks": 38912, 00:21:54.860 "uuid": "e9833752-403c-44e5-b328-d0f31dc0c2e9", 00:21:54.860 "assigned_rate_limits": { 00:21:54.860 "rw_ios_per_sec": 0, 00:21:54.860 "rw_mbytes_per_sec": 0, 00:21:54.860 "r_mbytes_per_sec": 0, 00:21:54.860 "w_mbytes_per_sec": 0 00:21:54.860 }, 00:21:54.860 "claimed": false, 00:21:54.860 "zoned": false, 00:21:54.860 "supported_io_types": { 00:21:54.860 "read": true, 00:21:54.860 "write": true, 00:21:54.860 "unmap": true, 00:21:54.860 "write_zeroes": true, 00:21:54.860 "flush": false, 00:21:54.860 "reset": true, 00:21:54.860 "compare": false, 00:21:54.860 "compare_and_write": false, 00:21:54.860 "abort": false, 00:21:54.860 "nvme_admin": false, 00:21:54.860 "nvme_io": false 00:21:54.860 }, 00:21:54.860 "driver_specific": { 00:21:54.860 "lvol": { 00:21:54.860 "lvol_store_uuid": "97045d38-0938-414b-9a4e-108dc6551ad9", 00:21:54.860 "base_bdev": "aio_bdev", 00:21:54.860 "thin_provision": false, 00:21:54.860 "snapshot": false, 00:21:54.860 "clone": false, 00:21:54.860 "esnap_clone": false 00:21:54.860 } 00:21:54.860 } 00:21:54.860 } 00:21:54.860 ] 00:21:54.860 20:16:52 -- common/autotest_common.sh@895 -- # return 0 00:21:54.860 20:16:52 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97045d38-0938-414b-9a4e-108dc6551ad9 00:21:54.860 20:16:52 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:21:54.860 20:16:52 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:21:54.860 20:16:52 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97045d38-0938-414b-9a4e-108dc6551ad9 00:21:54.860 20:16:52 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:21:55.118 20:16:52 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:21:55.118 20:16:52 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e9833752-403c-44e5-b328-d0f31dc0c2e9 00:21:55.118 20:16:53 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 97045d38-0938-414b-9a4e-108dc6551ad9 00:21:55.375 20:16:53 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:55.633 20:16:53 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:55.633 00:21:55.633 real 0m14.859s 00:21:55.633 user 0m14.448s 00:21:55.633 sys 0m1.211s 00:21:55.633 20:16:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:55.633 20:16:53 -- common/autotest_common.sh@10 -- # set +x 00:21:55.633 ************************************ 00:21:55.633 END TEST lvs_grow_clean 00:21:55.633 ************************************ 00:21:55.633 20:16:53 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:21:55.633 20:16:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:55.633 20:16:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:55.633 20:16:53 -- common/autotest_common.sh@10 -- # set +x 00:21:55.633 ************************************ 00:21:55.633 START TEST lvs_grow_dirty 00:21:55.633 ************************************ 00:21:55.633 20:16:53 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:21:55.633 20:16:53 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:21:55.633 20:16:53 -- target/nvmf_lvs_grow.sh@16 -- # local 
data_clusters free_clusters 00:21:55.633 20:16:53 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:21:55.633 20:16:53 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:21:55.633 20:16:53 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:21:55.633 20:16:53 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:21:55.633 20:16:53 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:55.633 20:16:53 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:55.633 20:16:53 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:55.890 20:16:53 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:21:55.890 20:16:53 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:21:55.890 20:16:53 -- target/nvmf_lvs_grow.sh@28 -- # lvs=327e5675-11e2-4897-a4da-72d01fd88571 00:21:55.890 20:16:53 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 327e5675-11e2-4897-a4da-72d01fd88571 00:21:55.890 20:16:53 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:21:56.148 20:16:53 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:21:56.148 20:16:53 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:21:56.148 20:16:53 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 327e5675-11e2-4897-a4da-72d01fd88571 lvol 150 00:21:56.148 20:16:54 -- target/nvmf_lvs_grow.sh@33 -- # lvol=a40d2779-de98-4fb0-8d42-3a212d4f5468 00:21:56.148 20:16:54 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:56.148 20:16:54 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:21:56.407 [2024-04-25 20:16:54.129290] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:21:56.407 [2024-04-25 20:16:54.129359] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:21:56.407 true 00:21:56.407 20:16:54 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 327e5675-11e2-4897-a4da-72d01fd88571 00:21:56.407 20:16:54 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:21:56.408 20:16:54 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:21:56.408 20:16:54 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:56.667 20:16:54 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a40d2779-de98-4fb0-8d42-3a212d4f5468 00:21:56.668 20:16:54 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:21:56.929 20:16:54 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:56.929 20:16:54 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1565396 00:21:56.929 20:16:54 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:56.929 20:16:54 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1565396 /var/tmp/bdevperf.sock 00:21:56.929 20:16:54 -- common/autotest_common.sh@819 -- # '[' -z 1565396 ']' 00:21:56.929 20:16:54 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:21:56.929 20:16:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.929 20:16:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:56.929 20:16:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:56.929 20:16:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:56.929 20:16:54 -- common/autotest_common.sh@10 -- # set +x 00:21:56.929 [2024-04-25 20:16:54.858869] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:56.929 [2024-04-25 20:16:54.858995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1565396 ] 00:21:57.189 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.189 [2024-04-25 20:16:54.973899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.189 [2024-04-25 20:16:55.062336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.753 20:16:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:57.753 20:16:55 -- common/autotest_common.sh@852 -- # return 0 00:21:57.753 20:16:55 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:21:58.010 Nvme0n1 00:21:58.010 20:16:55 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:21:58.010 [ 00:21:58.010 { 00:21:58.010 "name": "Nvme0n1", 00:21:58.010 "aliases": [ 00:21:58.010 "a40d2779-de98-4fb0-8d42-3a212d4f5468" 00:21:58.010 ], 00:21:58.010 "product_name": "NVMe disk", 00:21:58.010 "block_size": 4096, 00:21:58.010 "num_blocks": 38912, 00:21:58.010 "uuid": "a40d2779-de98-4fb0-8d42-3a212d4f5468", 00:21:58.010 "assigned_rate_limits": { 00:21:58.010 "rw_ios_per_sec": 0, 00:21:58.010 "rw_mbytes_per_sec": 0, 00:21:58.010 "r_mbytes_per_sec": 0, 00:21:58.010 "w_mbytes_per_sec": 0 00:21:58.010 }, 00:21:58.010 "claimed": false, 00:21:58.010 "zoned": false, 00:21:58.010 "supported_io_types": { 00:21:58.010 "read": true, 00:21:58.010 "write": true, 00:21:58.010 "unmap": true, 00:21:58.010 "write_zeroes": true, 00:21:58.010 "flush": true, 00:21:58.010 "reset": true, 00:21:58.010 "compare": true, 00:21:58.010 "compare_and_write": true, 00:21:58.010 "abort": true, 00:21:58.010 "nvme_admin": true, 
00:21:58.010 "nvme_io": true 00:21:58.010 }, 00:21:58.010 "driver_specific": { 00:21:58.010 "nvme": [ 00:21:58.010 { 00:21:58.010 "trid": { 00:21:58.010 "trtype": "TCP", 00:21:58.010 "adrfam": "IPv4", 00:21:58.010 "traddr": "10.0.0.2", 00:21:58.010 "trsvcid": "4420", 00:21:58.010 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:58.010 }, 00:21:58.010 "ctrlr_data": { 00:21:58.010 "cntlid": 1, 00:21:58.010 "vendor_id": "0x8086", 00:21:58.010 "model_number": "SPDK bdev Controller", 00:21:58.010 "serial_number": "SPDK0", 00:21:58.010 "firmware_revision": "24.01.1", 00:21:58.010 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:58.010 "oacs": { 00:21:58.010 "security": 0, 00:21:58.010 "format": 0, 00:21:58.010 "firmware": 0, 00:21:58.010 "ns_manage": 0 00:21:58.010 }, 00:21:58.010 "multi_ctrlr": true, 00:21:58.010 "ana_reporting": false 00:21:58.010 }, 00:21:58.010 "vs": { 00:21:58.010 "nvme_version": "1.3" 00:21:58.010 }, 00:21:58.010 "ns_data": { 00:21:58.010 "id": 1, 00:21:58.010 "can_share": true 00:21:58.010 } 00:21:58.010 } 00:21:58.010 ], 00:21:58.010 "mp_policy": "active_passive" 00:21:58.010 } 00:21:58.010 } 00:21:58.010 ] 00:21:58.010 20:16:55 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1565490 00:21:58.010 20:16:55 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:21:58.010 20:16:55 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:58.268 Running I/O for 10 seconds... 00:21:59.203 Latency(us) 00:21:59.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:59.203 Nvme0n1 : 1.00 24061.00 93.99 0.00 0.00 0.00 0.00 0.00 00:21:59.203 =================================================================================================================== 00:21:59.203 Total : 24061.00 93.99 0.00 0.00 0.00 0.00 0.00 00:21:59.203 00:22:00.137 20:16:57 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 327e5675-11e2-4897-a4da-72d01fd88571 00:22:00.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:00.137 Nvme0n1 : 2.00 24057.00 93.97 0.00 0.00 0.00 0.00 0.00 00:22:00.137 =================================================================================================================== 00:22:00.137 Total : 24057.00 93.97 0.00 0.00 0.00 0.00 0.00 00:22:00.137 00:22:00.137 true 00:22:00.137 20:16:58 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 327e5675-11e2-4897-a4da-72d01fd88571 00:22:00.137 20:16:58 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:22:00.396 20:16:58 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:22:00.396 20:16:58 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:22:00.397 20:16:58 -- target/nvmf_lvs_grow.sh@65 -- # wait 1565490 00:22:01.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:01.333 Nvme0n1 : 3.00 24129.00 94.25 0.00 0.00 0.00 0.00 0.00 00:22:01.333 =================================================================================================================== 00:22:01.333 Total : 24129.00 94.25 0.00 0.00 0.00 0.00 0.00 00:22:01.333 00:22:02.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:02.271 Nvme0n1 : 4.00 24204.00 94.55 0.00 0.00 0.00 0.00 0.00 
00:22:02.272 =================================================================================================================== 00:22:02.272 Total : 24204.00 94.55 0.00 0.00 0.00 0.00 0.00 00:22:02.272 00:22:03.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:03.209 Nvme0n1 : 5.00 24188.60 94.49 0.00 0.00 0.00 0.00 0.00 00:22:03.209 =================================================================================================================== 00:22:03.209 Total : 24188.60 94.49 0.00 0.00 0.00 0.00 0.00 00:22:03.209 00:22:04.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:04.147 Nvme0n1 : 6.00 24220.67 94.61 0.00 0.00 0.00 0.00 0.00 00:22:04.147 =================================================================================================================== 00:22:04.147 Total : 24220.67 94.61 0.00 0.00 0.00 0.00 0.00 00:22:04.147 00:22:05.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:05.082 Nvme0n1 : 7.00 24237.14 94.68 0.00 0.00 0.00 0.00 0.00 00:22:05.082 =================================================================================================================== 00:22:05.082 Total : 24237.14 94.68 0.00 0.00 0.00 0.00 0.00 00:22:05.082 00:22:06.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:06.466 Nvme0n1 : 8.00 24262.12 94.77 0.00 0.00 0.00 0.00 0.00 00:22:06.466 =================================================================================================================== 00:22:06.466 Total : 24262.12 94.77 0.00 0.00 0.00 0.00 0.00 00:22:06.466 00:22:07.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:07.400 Nvme0n1 : 9.00 24282.44 94.85 0.00 0.00 0.00 0.00 0.00 00:22:07.400 =================================================================================================================== 00:22:07.400 Total : 24282.44 94.85 0.00 0.00 0.00 0.00 0.00 00:22:07.400 00:22:08.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:08.336 Nvme0n1 : 10.00 24285.60 94.87 0.00 0.00 0.00 0.00 0.00 00:22:08.336 =================================================================================================================== 00:22:08.336 Total : 24285.60 94.87 0.00 0.00 0.00 0.00 0.00 00:22:08.336 00:22:08.336 00:22:08.336 Latency(us) 00:22:08.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:08.337 Nvme0n1 : 10.00 24287.70 94.87 0.00 0.00 5267.20 3242.31 13038.21 00:22:08.337 =================================================================================================================== 00:22:08.337 Total : 24287.70 94.87 0.00 0.00 5267.20 3242.31 13038.21 00:22:08.337 0 00:22:08.337 20:17:05 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1565396 00:22:08.337 20:17:05 -- common/autotest_common.sh@926 -- # '[' -z 1565396 ']' 00:22:08.337 20:17:05 -- common/autotest_common.sh@930 -- # kill -0 1565396 00:22:08.337 20:17:06 -- common/autotest_common.sh@931 -- # uname 00:22:08.337 20:17:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:08.337 20:17:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1565396 00:22:08.337 20:17:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:08.337 20:17:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:08.337 20:17:06 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 1565396' 00:22:08.337 killing process with pid 1565396 00:22:08.337 20:17:06 -- common/autotest_common.sh@945 -- # kill 1565396 00:22:08.337 Received shutdown signal, test time was about 10.000000 seconds 00:22:08.337 00:22:08.337 Latency(us) 00:22:08.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.337 =================================================================================================================== 00:22:08.337 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:08.337 20:17:06 -- common/autotest_common.sh@950 -- # wait 1565396 00:22:08.594 20:17:06 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:08.852 20:17:06 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 327e5675-11e2-4897-a4da-72d01fd88571 00:22:08.852 20:17:06 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:22:08.852 20:17:06 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:22:08.852 20:17:06 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:22:08.852 20:17:06 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1561885 00:22:08.852 20:17:06 -- target/nvmf_lvs_grow.sh@74 -- # wait 1561885 00:22:08.852 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1561885 Killed "${NVMF_APP[@]}" "$@" 00:22:08.852 20:17:06 -- target/nvmf_lvs_grow.sh@74 -- # true 00:22:08.852 20:17:06 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:22:08.852 20:17:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:08.852 20:17:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:08.852 20:17:06 -- common/autotest_common.sh@10 -- # set +x 00:22:08.852 20:17:06 -- nvmf/common.sh@469 -- # nvmfpid=1567591 00:22:08.852 20:17:06 -- nvmf/common.sh@470 -- # waitforlisten 1567591 00:22:08.852 20:17:06 -- common/autotest_common.sh@819 -- # '[' -z 1567591 ']' 00:22:08.852 20:17:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.852 20:17:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:08.852 20:17:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.852 20:17:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:08.852 20:17:06 -- common/autotest_common.sh@10 -- # set +x 00:22:08.852 20:17:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:09.110 [2024-04-25 20:17:06.807607] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:22:09.110 [2024-04-25 20:17:06.807719] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.110 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.110 [2024-04-25 20:17:06.919283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.110 [2024-04-25 20:17:07.013791] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:09.110 [2024-04-25 20:17:07.014003] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.110 [2024-04-25 20:17:07.014019] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.110 [2024-04-25 20:17:07.014030] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.110 [2024-04-25 20:17:07.014058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.681 20:17:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:09.681 20:17:07 -- common/autotest_common.sh@852 -- # return 0 00:22:09.681 20:17:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:09.681 20:17:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:09.681 20:17:07 -- common/autotest_common.sh@10 -- # set +x 00:22:09.681 20:17:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.681 20:17:07 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:09.942 [2024-04-25 20:17:07.671044] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:09.942 [2024-04-25 20:17:07.671169] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:09.942 [2024-04-25 20:17:07.671197] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:09.942 20:17:07 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:22:09.942 20:17:07 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev a40d2779-de98-4fb0-8d42-3a212d4f5468 00:22:09.942 20:17:07 -- common/autotest_common.sh@887 -- # local bdev_name=a40d2779-de98-4fb0-8d42-3a212d4f5468 00:22:09.942 20:17:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:09.942 20:17:07 -- common/autotest_common.sh@889 -- # local i 00:22:09.942 20:17:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:09.942 20:17:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:09.942 20:17:07 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:09.942 20:17:07 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a40d2779-de98-4fb0-8d42-3a212d4f5468 -t 2000 00:22:10.290 [ 00:22:10.290 { 00:22:10.290 "name": "a40d2779-de98-4fb0-8d42-3a212d4f5468", 00:22:10.290 "aliases": [ 00:22:10.290 "lvs/lvol" 00:22:10.290 ], 00:22:10.290 "product_name": "Logical Volume", 00:22:10.290 "block_size": 4096, 00:22:10.290 "num_blocks": 38912, 00:22:10.290 "uuid": "a40d2779-de98-4fb0-8d42-3a212d4f5468", 00:22:10.290 "assigned_rate_limits": { 00:22:10.290 "rw_ios_per_sec": 0, 00:22:10.290 "rw_mbytes_per_sec": 0, 00:22:10.290 "r_mbytes_per_sec": 0, 00:22:10.290 "w_mbytes_per_sec": 0 
00:22:10.290 }, 00:22:10.290 "claimed": false, 00:22:10.290 "zoned": false, 00:22:10.290 "supported_io_types": { 00:22:10.290 "read": true, 00:22:10.290 "write": true, 00:22:10.290 "unmap": true, 00:22:10.290 "write_zeroes": true, 00:22:10.290 "flush": false, 00:22:10.290 "reset": true, 00:22:10.290 "compare": false, 00:22:10.290 "compare_and_write": false, 00:22:10.290 "abort": false, 00:22:10.290 "nvme_admin": false, 00:22:10.290 "nvme_io": false 00:22:10.290 }, 00:22:10.290 "driver_specific": { 00:22:10.290 "lvol": { 00:22:10.290 "lvol_store_uuid": "327e5675-11e2-4897-a4da-72d01fd88571", 00:22:10.290 "base_bdev": "aio_bdev", 00:22:10.290 "thin_provision": false, 00:22:10.290 "snapshot": false, 00:22:10.290 "clone": false, 00:22:10.290 "esnap_clone": false 00:22:10.290 } 00:22:10.290 } 00:22:10.290 } 00:22:10.290 ] 00:22:10.290 20:17:07 -- common/autotest_common.sh@895 -- # return 0 00:22:10.290 20:17:07 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 327e5675-11e2-4897-a4da-72d01fd88571 00:22:10.290 20:17:07 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:22:10.290 20:17:08 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:22:10.290 20:17:08 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 327e5675-11e2-4897-a4da-72d01fd88571 00:22:10.290 20:17:08 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:22:10.549 20:17:08 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:22:10.549 20:17:08 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:22:10.549 [2024-04-25 20:17:08.353288] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:22:10.549 20:17:08 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 327e5675-11e2-4897-a4da-72d01fd88571 00:22:10.549 20:17:08 -- common/autotest_common.sh@640 -- # local es=0 00:22:10.549 20:17:08 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 327e5675-11e2-4897-a4da-72d01fd88571 00:22:10.549 20:17:08 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:10.549 20:17:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:10.549 20:17:08 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:10.549 20:17:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:10.549 20:17:08 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:10.549 20:17:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:10.549 20:17:08 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:10.549 20:17:08 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:22:10.549 20:17:08 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 327e5675-11e2-4897-a4da-72d01fd88571 00:22:10.807 request: 00:22:10.807 { 00:22:10.807 "uuid": "327e5675-11e2-4897-a4da-72d01fd88571", 00:22:10.807 "method": 
"bdev_lvol_get_lvstores", 00:22:10.807 "req_id": 1 00:22:10.807 } 00:22:10.807 Got JSON-RPC error response 00:22:10.807 response: 00:22:10.807 { 00:22:10.807 "code": -19, 00:22:10.807 "message": "No such device" 00:22:10.807 } 00:22:10.807 20:17:08 -- common/autotest_common.sh@643 -- # es=1 00:22:10.807 20:17:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:10.807 20:17:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:10.807 20:17:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:10.807 20:17:08 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:10.807 aio_bdev 00:22:10.807 20:17:08 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev a40d2779-de98-4fb0-8d42-3a212d4f5468 00:22:10.807 20:17:08 -- common/autotest_common.sh@887 -- # local bdev_name=a40d2779-de98-4fb0-8d42-3a212d4f5468 00:22:10.807 20:17:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:10.807 20:17:08 -- common/autotest_common.sh@889 -- # local i 00:22:10.807 20:17:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:10.807 20:17:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:10.807 20:17:08 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:11.067 20:17:08 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a40d2779-de98-4fb0-8d42-3a212d4f5468 -t 2000 00:22:11.067 [ 00:22:11.067 { 00:22:11.067 "name": "a40d2779-de98-4fb0-8d42-3a212d4f5468", 00:22:11.067 "aliases": [ 00:22:11.067 "lvs/lvol" 00:22:11.067 ], 00:22:11.067 "product_name": "Logical Volume", 00:22:11.067 "block_size": 4096, 00:22:11.067 "num_blocks": 38912, 00:22:11.067 "uuid": "a40d2779-de98-4fb0-8d42-3a212d4f5468", 00:22:11.067 "assigned_rate_limits": { 00:22:11.067 "rw_ios_per_sec": 0, 00:22:11.067 "rw_mbytes_per_sec": 0, 00:22:11.067 "r_mbytes_per_sec": 0, 00:22:11.067 "w_mbytes_per_sec": 0 00:22:11.067 }, 00:22:11.067 "claimed": false, 00:22:11.067 "zoned": false, 00:22:11.067 "supported_io_types": { 00:22:11.067 "read": true, 00:22:11.067 "write": true, 00:22:11.067 "unmap": true, 00:22:11.067 "write_zeroes": true, 00:22:11.067 "flush": false, 00:22:11.067 "reset": true, 00:22:11.067 "compare": false, 00:22:11.067 "compare_and_write": false, 00:22:11.067 "abort": false, 00:22:11.067 "nvme_admin": false, 00:22:11.067 "nvme_io": false 00:22:11.067 }, 00:22:11.067 "driver_specific": { 00:22:11.067 "lvol": { 00:22:11.067 "lvol_store_uuid": "327e5675-11e2-4897-a4da-72d01fd88571", 00:22:11.067 "base_bdev": "aio_bdev", 00:22:11.067 "thin_provision": false, 00:22:11.067 "snapshot": false, 00:22:11.067 "clone": false, 00:22:11.067 "esnap_clone": false 00:22:11.067 } 00:22:11.067 } 00:22:11.067 } 00:22:11.067 ] 00:22:11.067 20:17:08 -- common/autotest_common.sh@895 -- # return 0 00:22:11.067 20:17:08 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 327e5675-11e2-4897-a4da-72d01fd88571 00:22:11.067 20:17:08 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:22:11.329 20:17:09 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:22:11.329 20:17:09 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 327e5675-11e2-4897-a4da-72d01fd88571 00:22:11.329 
20:17:09 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:22:11.329 20:17:09 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:22:11.329 20:17:09 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a40d2779-de98-4fb0-8d42-3a212d4f5468 00:22:11.591 20:17:09 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 327e5675-11e2-4897-a4da-72d01fd88571 00:22:11.591 20:17:09 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:22:11.852 20:17:09 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:22:11.852 00:22:11.852 real 0m16.233s 00:22:11.852 user 0m42.330s 00:22:11.852 sys 0m3.057s 00:22:11.852 20:17:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:11.852 20:17:09 -- common/autotest_common.sh@10 -- # set +x 00:22:11.852 ************************************ 00:22:11.852 END TEST lvs_grow_dirty 00:22:11.852 ************************************ 00:22:11.852 20:17:09 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:22:11.852 20:17:09 -- common/autotest_common.sh@796 -- # type=--id 00:22:11.852 20:17:09 -- common/autotest_common.sh@797 -- # id=0 00:22:11.852 20:17:09 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:22:11.852 20:17:09 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:11.852 20:17:09 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:22:11.852 20:17:09 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:22:11.852 20:17:09 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:22:11.852 20:17:09 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:11.852 nvmf_trace.0 00:22:11.852 20:17:09 -- common/autotest_common.sh@811 -- # return 0 00:22:11.852 20:17:09 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:22:11.852 20:17:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:11.852 20:17:09 -- nvmf/common.sh@116 -- # sync 00:22:11.852 20:17:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:11.852 20:17:09 -- nvmf/common.sh@119 -- # set +e 00:22:11.852 20:17:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:11.852 20:17:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:11.852 rmmod nvme_tcp 00:22:11.852 rmmod nvme_fabrics 00:22:11.852 rmmod nvme_keyring 00:22:11.852 20:17:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:11.852 20:17:09 -- nvmf/common.sh@123 -- # set -e 00:22:11.852 20:17:09 -- nvmf/common.sh@124 -- # return 0 00:22:11.852 20:17:09 -- nvmf/common.sh@477 -- # '[' -n 1567591 ']' 00:22:11.852 20:17:09 -- nvmf/common.sh@478 -- # killprocess 1567591 00:22:11.852 20:17:09 -- common/autotest_common.sh@926 -- # '[' -z 1567591 ']' 00:22:11.852 20:17:09 -- common/autotest_common.sh@930 -- # kill -0 1567591 00:22:11.852 20:17:09 -- common/autotest_common.sh@931 -- # uname 00:22:11.852 20:17:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:11.852 20:17:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1567591 00:22:12.111 20:17:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:12.111 20:17:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:12.111 20:17:09 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 1567591' 00:22:12.111 killing process with pid 1567591 00:22:12.111 20:17:09 -- common/autotest_common.sh@945 -- # kill 1567591 00:22:12.111 20:17:09 -- common/autotest_common.sh@950 -- # wait 1567591 00:22:12.677 20:17:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:12.677 20:17:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:12.677 20:17:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:12.677 20:17:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:12.677 20:17:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:12.677 20:17:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.677 20:17:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.677 20:17:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.585 20:17:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:14.585 00:22:14.585 real 0m40.118s 00:22:14.585 user 1m1.985s 00:22:14.585 sys 0m8.486s 00:22:14.585 20:17:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:14.585 20:17:12 -- common/autotest_common.sh@10 -- # set +x 00:22:14.585 ************************************ 00:22:14.585 END TEST nvmf_lvs_grow 00:22:14.585 ************************************ 00:22:14.585 20:17:12 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:22:14.585 20:17:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:14.585 20:17:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:14.585 20:17:12 -- common/autotest_common.sh@10 -- # set +x 00:22:14.585 ************************************ 00:22:14.585 START TEST nvmf_bdev_io_wait 00:22:14.585 ************************************ 00:22:14.585 20:17:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:22:14.585 * Looking for test storage... 
00:22:14.585 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:22:14.585 20:17:12 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:14.585 20:17:12 -- nvmf/common.sh@7 -- # uname -s 00:22:14.585 20:17:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.585 20:17:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.585 20:17:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:14.585 20:17:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.585 20:17:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.585 20:17:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.585 20:17:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.585 20:17:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.585 20:17:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.585 20:17:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.585 20:17:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:14.585 20:17:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:14.585 20:17:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.585 20:17:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.585 20:17:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:14.585 20:17:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:14.585 20:17:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.585 20:17:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.585 20:17:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.585 20:17:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.585 20:17:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.585 20:17:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.585 20:17:12 -- paths/export.sh@5 -- # export PATH 00:22:14.585 20:17:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.585 20:17:12 -- nvmf/common.sh@46 -- # : 0 00:22:14.586 20:17:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:14.586 20:17:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:14.586 20:17:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:14.586 20:17:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.586 20:17:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.586 20:17:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:14.586 20:17:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:14.586 20:17:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:14.586 20:17:12 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:14.586 20:17:12 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:14.586 20:17:12 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:22:14.586 20:17:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:14.586 20:17:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.586 20:17:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:14.586 20:17:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:14.586 20:17:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:14.586 20:17:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.586 20:17:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.586 20:17:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.586 20:17:12 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:22:14.586 20:17:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:14.586 20:17:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:14.586 20:17:12 -- common/autotest_common.sh@10 -- # set +x 00:22:19.858 20:17:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:19.858 20:17:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:19.858 20:17:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:19.858 20:17:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:19.858 20:17:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:19.858 20:17:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:19.858 20:17:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:19.858 20:17:17 -- nvmf/common.sh@294 -- # net_devs=() 00:22:19.858 20:17:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:19.858 
20:17:17 -- nvmf/common.sh@295 -- # e810=() 00:22:19.858 20:17:17 -- nvmf/common.sh@295 -- # local -ga e810 00:22:19.858 20:17:17 -- nvmf/common.sh@296 -- # x722=() 00:22:19.858 20:17:17 -- nvmf/common.sh@296 -- # local -ga x722 00:22:19.858 20:17:17 -- nvmf/common.sh@297 -- # mlx=() 00:22:19.858 20:17:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:19.858 20:17:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.858 20:17:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.858 20:17:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.858 20:17:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.858 20:17:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.858 20:17:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.858 20:17:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.858 20:17:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.858 20:17:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.858 20:17:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.858 20:17:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.858 20:17:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:19.858 20:17:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:19.858 20:17:17 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:22:19.858 20:17:17 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:22:19.858 20:17:17 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:22:19.858 20:17:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:19.858 20:17:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:19.858 20:17:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:19.858 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:19.858 20:17:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:19.858 20:17:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:19.858 20:17:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.858 20:17:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.858 20:17:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:19.858 20:17:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:19.858 20:17:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:19.858 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:19.858 20:17:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:19.858 20:17:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:19.858 20:17:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.858 20:17:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.858 20:17:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:19.858 20:17:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:19.858 20:17:17 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:22:19.858 20:17:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:19.858 20:17:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.858 20:17:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:19.858 20:17:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.858 20:17:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:19.858 Found net devices under 0000:27:00.0: cvl_0_0 
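Both E810 ports (0x8086:0x159b) are mapped to their kernel interface names the same way; the second port, resolved just below, comes back as cvl_0_1. Condensed from the pci_net_devs handling traced here, the lookup is essentially the short sketch that follows (PCI address and names taken from this run; an illustration, not the full nvmf/common.sh logic):

net_devs=()
pci=0000:27:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs glob, e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep only the interface names
net_devs+=("${pci_net_devs[@]}")
echo "Found net devices under $pci: ${pci_net_devs[*]}"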
00:22:19.858 20:17:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.858 20:17:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:19.858 20:17:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.858 20:17:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:19.858 20:17:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.858 20:17:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:19.858 Found net devices under 0000:27:00.1: cvl_0_1 00:22:19.858 20:17:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.858 20:17:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:19.858 20:17:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:19.858 20:17:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:19.858 20:17:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:19.858 20:17:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:19.858 20:17:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.858 20:17:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.858 20:17:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.858 20:17:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:19.858 20:17:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.858 20:17:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.858 20:17:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:19.859 20:17:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.859 20:17:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.859 20:17:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:19.859 20:17:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:19.859 20:17:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.859 20:17:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.859 20:17:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.859 20:17:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.859 20:17:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:19.859 20:17:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.859 20:17:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.859 20:17:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.859 20:17:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:19.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:22:19.859 00:22:19.859 --- 10.0.0.2 ping statistics --- 00:22:19.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.859 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:22:19.859 20:17:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:19.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:22:19.859 00:22:19.859 --- 10.0.0.1 ping statistics --- 00:22:19.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.859 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:22:19.859 20:17:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.859 20:17:17 -- nvmf/common.sh@410 -- # return 0 00:22:19.859 20:17:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:19.859 20:17:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.859 20:17:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:19.859 20:17:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:19.859 20:17:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.859 20:17:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:19.859 20:17:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:19.859 20:17:17 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:19.859 20:17:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:19.859 20:17:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:19.859 20:17:17 -- common/autotest_common.sh@10 -- # set +x 00:22:19.859 20:17:17 -- nvmf/common.sh@469 -- # nvmfpid=1572299 00:22:19.859 20:17:17 -- nvmf/common.sh@470 -- # waitforlisten 1572299 00:22:19.859 20:17:17 -- common/autotest_common.sh@819 -- # '[' -z 1572299 ']' 00:22:19.859 20:17:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.859 20:17:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:19.859 20:17:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.859 20:17:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:19.859 20:17:17 -- common/autotest_common.sh@10 -- # set +x 00:22:19.859 20:17:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:20.117 [2024-04-25 20:17:17.804029] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:20.117 [2024-04-25 20:17:17.804130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.117 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.117 [2024-04-25 20:17:17.923479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:20.117 [2024-04-25 20:17:18.016083] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:20.117 [2024-04-25 20:17:18.016261] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.117 [2024-04-25 20:17:18.016273] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.117 [2024-04-25 20:17:18.016284] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
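The two ping exchanges above are the sanity check on the loopback fixture that nvmf_tcp_init assembles from those two ports: cvl_0_0 is moved into a private network namespace and addressed as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and an iptables rule admits TCP port 4420. Collected from the commands traced above (interface names and addresses as in this run):

TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                          # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                   # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target side, inside the namespace
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                            # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                        # target -> initiator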
00:22:20.117 [2024-04-25 20:17:18.016482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.117 [2024-04-25 20:17:18.016560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.117 [2024-04-25 20:17:18.016610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.117 [2024-04-25 20:17:18.016622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:20.687 20:17:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:20.687 20:17:18 -- common/autotest_common.sh@852 -- # return 0 00:22:20.687 20:17:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:20.687 20:17:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:20.687 20:17:18 -- common/autotest_common.sh@10 -- # set +x 00:22:20.687 20:17:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.687 20:17:18 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:22:20.687 20:17:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.687 20:17:18 -- common/autotest_common.sh@10 -- # set +x 00:22:20.687 20:17:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.687 20:17:18 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:22:20.687 20:17:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.687 20:17:18 -- common/autotest_common.sh@10 -- # set +x 00:22:20.949 20:17:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.949 20:17:18 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:20.949 20:17:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.949 20:17:18 -- common/autotest_common.sh@10 -- # set +x 00:22:20.949 [2024-04-25 20:17:18.664543] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.949 20:17:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.949 20:17:18 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:20.949 20:17:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.949 20:17:18 -- common/autotest_common.sh@10 -- # set +x 00:22:20.949 Malloc0 00:22:20.949 20:17:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.949 20:17:18 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:20.949 20:17:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.949 20:17:18 -- common/autotest_common.sh@10 -- # set +x 00:22:20.949 20:17:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.949 20:17:18 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:20.949 20:17:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.949 20:17:18 -- common/autotest_common.sh@10 -- # set +x 00:22:20.949 20:17:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.949 20:17:18 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:20.949 20:17:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.949 20:17:18 -- common/autotest_common.sh@10 -- # set +x 00:22:20.949 [2024-04-25 20:17:18.737333] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.949 20:17:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.949 20:17:18 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1572455 00:22:20.949 
20:17:18 -- target/bdev_io_wait.sh@30 -- # READ_PID=1572456 00:22:20.949 20:17:18 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1572458 00:22:20.949 20:17:18 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:22:20.949 20:17:18 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1572460 00:22:20.949 20:17:18 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:22:20.949 20:17:18 -- target/bdev_io_wait.sh@35 -- # sync 00:22:20.949 20:17:18 -- nvmf/common.sh@520 -- # config=() 00:22:20.949 20:17:18 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:22:20.949 20:17:18 -- nvmf/common.sh@520 -- # local subsystem config 00:22:20.949 20:17:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:20.949 20:17:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:20.949 { 00:22:20.949 "params": { 00:22:20.949 "name": "Nvme$subsystem", 00:22:20.949 "trtype": "$TEST_TRANSPORT", 00:22:20.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.949 "adrfam": "ipv4", 00:22:20.949 "trsvcid": "$NVMF_PORT", 00:22:20.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.949 "hdgst": ${hdgst:-false}, 00:22:20.949 "ddgst": ${ddgst:-false} 00:22:20.949 }, 00:22:20.949 "method": "bdev_nvme_attach_controller" 00:22:20.949 } 00:22:20.949 EOF 00:22:20.949 )") 00:22:20.949 20:17:18 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:22:20.949 20:17:18 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:22:20.949 20:17:18 -- nvmf/common.sh@520 -- # config=() 00:22:20.949 20:17:18 -- nvmf/common.sh@520 -- # local subsystem config 00:22:20.949 20:17:18 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:22:20.949 20:17:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:20.949 20:17:18 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:22:20.949 20:17:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:20.949 { 00:22:20.949 "params": { 00:22:20.949 "name": "Nvme$subsystem", 00:22:20.949 "trtype": "$TEST_TRANSPORT", 00:22:20.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.949 "adrfam": "ipv4", 00:22:20.949 "trsvcid": "$NVMF_PORT", 00:22:20.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.949 "hdgst": ${hdgst:-false}, 00:22:20.949 "ddgst": ${ddgst:-false} 00:22:20.949 }, 00:22:20.949 "method": "bdev_nvme_attach_controller" 00:22:20.949 } 00:22:20.949 EOF 00:22:20.949 )") 00:22:20.949 20:17:18 -- nvmf/common.sh@520 -- # config=() 00:22:20.949 20:17:18 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:22:20.949 20:17:18 -- nvmf/common.sh@520 -- # local subsystem config 00:22:20.949 20:17:18 -- nvmf/common.sh@520 -- # config=() 00:22:20.949 20:17:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:20.949 20:17:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:20.949 { 00:22:20.949 "params": { 00:22:20.949 "name": "Nvme$subsystem", 00:22:20.949 "trtype": "$TEST_TRANSPORT", 00:22:20.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.949 "adrfam": "ipv4", 
00:22:20.949 "trsvcid": "$NVMF_PORT", 00:22:20.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.949 "hdgst": ${hdgst:-false}, 00:22:20.949 "ddgst": ${ddgst:-false} 00:22:20.949 }, 00:22:20.949 "method": "bdev_nvme_attach_controller" 00:22:20.949 } 00:22:20.949 EOF 00:22:20.949 )") 00:22:20.949 20:17:18 -- nvmf/common.sh@542 -- # cat 00:22:20.949 20:17:18 -- nvmf/common.sh@520 -- # local subsystem config 00:22:20.949 20:17:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:20.949 20:17:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:20.949 { 00:22:20.949 "params": { 00:22:20.949 "name": "Nvme$subsystem", 00:22:20.949 "trtype": "$TEST_TRANSPORT", 00:22:20.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.949 "adrfam": "ipv4", 00:22:20.949 "trsvcid": "$NVMF_PORT", 00:22:20.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.949 "hdgst": ${hdgst:-false}, 00:22:20.949 "ddgst": ${ddgst:-false} 00:22:20.949 }, 00:22:20.949 "method": "bdev_nvme_attach_controller" 00:22:20.949 } 00:22:20.949 EOF 00:22:20.949 )") 00:22:20.949 20:17:18 -- target/bdev_io_wait.sh@37 -- # wait 1572455 00:22:20.949 20:17:18 -- nvmf/common.sh@542 -- # cat 00:22:20.949 20:17:18 -- nvmf/common.sh@542 -- # cat 00:22:20.949 20:17:18 -- nvmf/common.sh@542 -- # cat 00:22:20.949 20:17:18 -- nvmf/common.sh@544 -- # jq . 00:22:20.949 20:17:18 -- nvmf/common.sh@544 -- # jq . 00:22:20.949 20:17:18 -- nvmf/common.sh@544 -- # jq . 00:22:20.949 20:17:18 -- nvmf/common.sh@545 -- # IFS=, 00:22:20.949 20:17:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:20.949 "params": { 00:22:20.949 "name": "Nvme1", 00:22:20.949 "trtype": "tcp", 00:22:20.949 "traddr": "10.0.0.2", 00:22:20.949 "adrfam": "ipv4", 00:22:20.949 "trsvcid": "4420", 00:22:20.949 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.949 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:20.949 "hdgst": false, 00:22:20.949 "ddgst": false 00:22:20.949 }, 00:22:20.949 "method": "bdev_nvme_attach_controller" 00:22:20.949 }' 00:22:20.949 20:17:18 -- nvmf/common.sh@544 -- # jq . 
00:22:20.949 20:17:18 -- nvmf/common.sh@545 -- # IFS=, 00:22:20.949 20:17:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:20.949 "params": { 00:22:20.949 "name": "Nvme1", 00:22:20.949 "trtype": "tcp", 00:22:20.949 "traddr": "10.0.0.2", 00:22:20.949 "adrfam": "ipv4", 00:22:20.949 "trsvcid": "4420", 00:22:20.949 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.949 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:20.949 "hdgst": false, 00:22:20.949 "ddgst": false 00:22:20.949 }, 00:22:20.949 "method": "bdev_nvme_attach_controller" 00:22:20.949 }' 00:22:20.949 20:17:18 -- nvmf/common.sh@545 -- # IFS=, 00:22:20.949 20:17:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:20.949 "params": { 00:22:20.949 "name": "Nvme1", 00:22:20.949 "trtype": "tcp", 00:22:20.949 "traddr": "10.0.0.2", 00:22:20.949 "adrfam": "ipv4", 00:22:20.949 "trsvcid": "4420", 00:22:20.949 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.949 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:20.949 "hdgst": false, 00:22:20.949 "ddgst": false 00:22:20.949 }, 00:22:20.949 "method": "bdev_nvme_attach_controller" 00:22:20.949 }' 00:22:20.949 20:17:18 -- nvmf/common.sh@545 -- # IFS=, 00:22:20.949 20:17:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:20.949 "params": { 00:22:20.949 "name": "Nvme1", 00:22:20.949 "trtype": "tcp", 00:22:20.949 "traddr": "10.0.0.2", 00:22:20.949 "adrfam": "ipv4", 00:22:20.949 "trsvcid": "4420", 00:22:20.949 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.949 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:20.949 "hdgst": false, 00:22:20.949 "ddgst": false 00:22:20.949 }, 00:22:20.949 "method": "bdev_nvme_attach_controller" 00:22:20.949 }' 00:22:20.949 [2024-04-25 20:17:18.795261] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:20.949 [2024-04-25 20:17:18.795338] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:22:20.949 [2024-04-25 20:17:18.814553] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:20.949 [2024-04-25 20:17:18.814665] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:22:20.949 [2024-04-25 20:17:18.826774] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:20.950 [2024-04-25 20:17:18.826920] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:22:20.950 [2024-04-25 20:17:18.827104] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:22:20.950 [2024-04-25 20:17:18.827252] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:20.950 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.211 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.211 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.211 [2024-04-25 20:17:18.984113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.211 [2024-04-25 20:17:19.021104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.211 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.211 [2024-04-25 20:17:19.117702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.211 [2024-04-25 20:17:19.120329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:22:21.472 [2024-04-25 20:17:19.160356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:21.472 [2024-04-25 20:17:19.222784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.472 [2024-04-25 20:17:19.245657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:21.472 [2024-04-25 20:17:19.361008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:21.733 Running I/O for 1 seconds... 00:22:21.733 Running I/O for 1 seconds... 00:22:21.733 Running I/O for 1 seconds... 00:22:21.994 Running I/O for 1 seconds... 00:22:22.565 00:22:22.565 Latency(us) 00:22:22.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.565 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:22:22.565 Nvme1n1 : 1.00 15513.70 60.60 0.00 0.00 8227.46 4415.06 13728.07 00:22:22.565 =================================================================================================================== 00:22:22.565 Total : 15513.70 60.60 0.00 0.00 8227.46 4415.06 13728.07 00:22:22.824 00:22:22.824 Latency(us) 00:22:22.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.824 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:22:22.824 Nvme1n1 : 1.00 197591.69 771.84 0.00 0.00 645.31 230.67 1647.02 00:22:22.824 =================================================================================================================== 00:22:22.824 Total : 197591.69 771.84 0.00 0.00 645.31 230.67 1647.02 00:22:22.824 00:22:22.824 Latency(us) 00:22:22.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.824 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:22:22.824 Nvme1n1 : 1.00 12123.51 47.36 0.00 0.00 10524.79 4587.52 19867.76 00:22:22.824 =================================================================================================================== 00:22:22.824 Total : 12123.51 47.36 0.00 0.00 10524.79 4587.52 19867.76 00:22:22.824 00:22:22.824 Latency(us) 00:22:22.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.824 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:22:22.824 Nvme1n1 : 1.00 12259.46 47.89 0.00 0.00 10408.72 4897.95 21799.34 00:22:22.824 =================================================================================================================== 00:22:22.824 Total : 12259.46 47.89 0.00 0.00 10408.72 4897.95 21799.34 00:22:23.398 20:17:21 -- target/bdev_io_wait.sh@38 -- # wait 1572456 00:22:23.398 
20:17:21 -- target/bdev_io_wait.sh@39 -- # wait 1572458 00:22:23.398 20:17:21 -- target/bdev_io_wait.sh@40 -- # wait 1572460 00:22:23.398 20:17:21 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:23.398 20:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.398 20:17:21 -- common/autotest_common.sh@10 -- # set +x 00:22:23.398 20:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.398 20:17:21 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:22:23.398 20:17:21 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:22:23.398 20:17:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:23.398 20:17:21 -- nvmf/common.sh@116 -- # sync 00:22:23.398 20:17:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:23.398 20:17:21 -- nvmf/common.sh@119 -- # set +e 00:22:23.398 20:17:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:23.398 20:17:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:23.398 rmmod nvme_tcp 00:22:23.398 rmmod nvme_fabrics 00:22:23.398 rmmod nvme_keyring 00:22:23.398 20:17:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:23.658 20:17:21 -- nvmf/common.sh@123 -- # set -e 00:22:23.659 20:17:21 -- nvmf/common.sh@124 -- # return 0 00:22:23.659 20:17:21 -- nvmf/common.sh@477 -- # '[' -n 1572299 ']' 00:22:23.659 20:17:21 -- nvmf/common.sh@478 -- # killprocess 1572299 00:22:23.659 20:17:21 -- common/autotest_common.sh@926 -- # '[' -z 1572299 ']' 00:22:23.659 20:17:21 -- common/autotest_common.sh@930 -- # kill -0 1572299 00:22:23.659 20:17:21 -- common/autotest_common.sh@931 -- # uname 00:22:23.659 20:17:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:23.659 20:17:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1572299 00:22:23.659 20:17:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:23.659 20:17:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:23.659 20:17:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1572299' 00:22:23.659 killing process with pid 1572299 00:22:23.659 20:17:21 -- common/autotest_common.sh@945 -- # kill 1572299 00:22:23.659 20:17:21 -- common/autotest_common.sh@950 -- # wait 1572299 00:22:23.919 20:17:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:23.919 20:17:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:23.919 20:17:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:23.919 20:17:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.919 20:17:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:23.919 20:17:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.919 20:17:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.919 20:17:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.483 20:17:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:26.483 00:22:26.483 real 0m11.507s 00:22:26.483 user 0m23.485s 00:22:26.483 sys 0m5.810s 00:22:26.483 20:17:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:26.483 20:17:23 -- common/autotest_common.sh@10 -- # set +x 00:22:26.483 ************************************ 00:22:26.483 END TEST nvmf_bdev_io_wait 00:22:26.483 ************************************ 00:22:26.483 20:17:23 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:22:26.483 20:17:23 -- common/autotest_common.sh@1077 -- # 
'[' 3 -le 1 ']' 00:22:26.483 20:17:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:26.483 20:17:23 -- common/autotest_common.sh@10 -- # set +x 00:22:26.483 ************************************ 00:22:26.483 START TEST nvmf_queue_depth 00:22:26.483 ************************************ 00:22:26.483 20:17:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:22:26.483 * Looking for test storage... 00:22:26.483 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:22:26.483 20:17:24 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.483 20:17:24 -- nvmf/common.sh@7 -- # uname -s 00:22:26.483 20:17:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.483 20:17:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.483 20:17:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.483 20:17:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.483 20:17:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.483 20:17:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.483 20:17:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.483 20:17:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.483 20:17:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.483 20:17:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.483 20:17:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:26.483 20:17:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:26.483 20:17:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.483 20:17:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.483 20:17:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:26.483 20:17:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:26.483 20:17:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.483 20:17:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.483 20:17:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.483 20:17:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.483 20:17:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.483 20:17:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.483 20:17:24 -- paths/export.sh@5 -- # export PATH 00:22:26.483 20:17:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.483 20:17:24 -- nvmf/common.sh@46 -- # : 0 00:22:26.483 20:17:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:26.483 20:17:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:26.483 20:17:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:26.483 20:17:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.483 20:17:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.483 20:17:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:26.483 20:17:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:26.484 20:17:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:26.484 20:17:24 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:22:26.484 20:17:24 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:22:26.484 20:17:24 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:26.484 20:17:24 -- target/queue_depth.sh@19 -- # nvmftestinit 00:22:26.484 20:17:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:26.484 20:17:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.484 20:17:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:26.484 20:17:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:26.484 20:17:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:26.484 20:17:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.484 20:17:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.484 20:17:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.484 20:17:24 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:22:26.484 20:17:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:26.484 20:17:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:26.484 20:17:24 -- common/autotest_common.sh@10 -- # set +x 00:22:31.754 20:17:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:31.754 20:17:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:31.754 20:17:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:31.754 20:17:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:31.754 20:17:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:31.754 20:17:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:31.754 20:17:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:31.754 20:17:29 -- nvmf/common.sh@294 -- # 
net_devs=() 00:22:31.754 20:17:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:31.754 20:17:29 -- nvmf/common.sh@295 -- # e810=() 00:22:31.754 20:17:29 -- nvmf/common.sh@295 -- # local -ga e810 00:22:31.754 20:17:29 -- nvmf/common.sh@296 -- # x722=() 00:22:31.754 20:17:29 -- nvmf/common.sh@296 -- # local -ga x722 00:22:31.754 20:17:29 -- nvmf/common.sh@297 -- # mlx=() 00:22:31.754 20:17:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:31.754 20:17:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.754 20:17:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.754 20:17:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.754 20:17:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.754 20:17:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.754 20:17:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.754 20:17:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.754 20:17:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.754 20:17:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.754 20:17:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.754 20:17:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.754 20:17:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:31.754 20:17:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:31.754 20:17:29 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:22:31.754 20:17:29 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:22:31.754 20:17:29 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:22:31.754 20:17:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:31.754 20:17:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:31.754 20:17:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:31.754 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:31.754 20:17:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:31.754 20:17:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:31.754 20:17:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.754 20:17:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.754 20:17:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:31.754 20:17:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:31.754 20:17:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:31.754 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:31.754 20:17:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:31.754 20:17:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:31.754 20:17:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.754 20:17:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.754 20:17:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:31.754 20:17:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:31.754 20:17:29 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:22:31.754 20:17:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:31.754 20:17:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.754 20:17:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:31.754 20:17:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.754 20:17:29 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:27:00.0: cvl_0_0' 00:22:31.754 Found net devices under 0000:27:00.0: cvl_0_0 00:22:31.754 20:17:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.754 20:17:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:31.754 20:17:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.754 20:17:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:31.754 20:17:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.754 20:17:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:31.754 Found net devices under 0000:27:00.1: cvl_0_1 00:22:31.754 20:17:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.754 20:17:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:31.754 20:17:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:31.755 20:17:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:31.755 20:17:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:31.755 20:17:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:31.755 20:17:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.755 20:17:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.755 20:17:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.755 20:17:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:31.755 20:17:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.755 20:17:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.755 20:17:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:31.755 20:17:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.755 20:17:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.755 20:17:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:31.755 20:17:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:31.755 20:17:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.755 20:17:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.755 20:17:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.755 20:17:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.755 20:17:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:31.755 20:17:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.755 20:17:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.755 20:17:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.755 20:17:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:31.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:22:31.755 00:22:31.755 --- 10.0.0.2 ping statistics --- 00:22:31.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.755 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:22:31.755 20:17:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:31.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:22:31.755 00:22:31.755 --- 10.0.0.1 ping statistics --- 00:22:31.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.755 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:22:31.755 20:17:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.755 20:17:29 -- nvmf/common.sh@410 -- # return 0 00:22:31.755 20:17:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:31.755 20:17:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.755 20:17:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:31.755 20:17:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:31.755 20:17:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.755 20:17:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:31.755 20:17:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:31.755 20:17:29 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:22:31.755 20:17:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:31.755 20:17:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:31.755 20:17:29 -- common/autotest_common.sh@10 -- # set +x 00:22:31.755 20:17:29 -- nvmf/common.sh@469 -- # nvmfpid=1576970 00:22:31.755 20:17:29 -- nvmf/common.sh@470 -- # waitforlisten 1576970 00:22:31.755 20:17:29 -- common/autotest_common.sh@819 -- # '[' -z 1576970 ']' 00:22:31.755 20:17:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.755 20:17:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:31.755 20:17:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.755 20:17:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:31.755 20:17:29 -- common/autotest_common.sh@10 -- # set +x 00:22:31.755 20:17:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:31.755 [2024-04-25 20:17:29.656251] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:31.755 [2024-04-25 20:17:29.656385] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.015 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.015 [2024-04-25 20:17:29.795449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.015 [2024-04-25 20:17:29.896959] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:32.015 [2024-04-25 20:17:29.897168] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.015 [2024-04-25 20:17:29.897185] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.015 [2024-04-25 20:17:29.897196] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:32.015 [2024-04-25 20:17:29.897233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.582 20:17:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:32.582 20:17:30 -- common/autotest_common.sh@852 -- # return 0 00:22:32.582 20:17:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:32.582 20:17:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:32.582 20:17:30 -- common/autotest_common.sh@10 -- # set +x 00:22:32.582 20:17:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.582 20:17:30 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:32.582 20:17:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:32.582 20:17:30 -- common/autotest_common.sh@10 -- # set +x 00:22:32.582 [2024-04-25 20:17:30.384116] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.582 20:17:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:32.582 20:17:30 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:32.582 20:17:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:32.582 20:17:30 -- common/autotest_common.sh@10 -- # set +x 00:22:32.582 Malloc0 00:22:32.582 20:17:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:32.582 20:17:30 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:32.582 20:17:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:32.582 20:17:30 -- common/autotest_common.sh@10 -- # set +x 00:22:32.582 20:17:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:32.582 20:17:30 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:32.582 20:17:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:32.582 20:17:30 -- common/autotest_common.sh@10 -- # set +x 00:22:32.582 20:17:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:32.582 20:17:30 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:32.582 20:17:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:32.582 20:17:30 -- common/autotest_common.sh@10 -- # set +x 00:22:32.582 [2024-04-25 20:17:30.452756] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.582 20:17:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:32.582 20:17:30 -- target/queue_depth.sh@30 -- # bdevperf_pid=1577282 00:22:32.582 20:17:30 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:32.582 20:17:30 -- target/queue_depth.sh@33 -- # waitforlisten 1577282 /var/tmp/bdevperf.sock 00:22:32.582 20:17:30 -- common/autotest_common.sh@819 -- # '[' -z 1577282 ']' 00:22:32.582 20:17:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.582 20:17:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:32.582 20:17:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:32.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
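Before bdevperf comes up on its own socket below, the target side has just been configured through rpc_cmd: a TCP transport, a 64 MiB / 512-byte-block malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace attached, and a listener on 10.0.0.2:4420. Assuming rpc_cmd forwards to scripts/rpc.py against the /var/tmp/spdk.sock the target listens on (the rpc.py path is assumed from this workspace), the equivalent manual sequence is roughly:

RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py   # assumed location of rpc.py in this tree
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420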
00:22:32.582 20:17:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:32.582 20:17:30 -- common/autotest_common.sh@10 -- # set +x 00:22:32.582 20:17:30 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:22:32.840 [2024-04-25 20:17:30.523092] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:32.841 [2024-04-25 20:17:30.523201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1577282 ] 00:22:32.841 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.841 [2024-04-25 20:17:30.636831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.841 [2024-04-25 20:17:30.732821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.410 20:17:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:33.410 20:17:31 -- common/autotest_common.sh@852 -- # return 0 00:22:33.410 20:17:31 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:33.410 20:17:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.410 20:17:31 -- common/autotest_common.sh@10 -- # set +x 00:22:33.410 NVMe0n1 00:22:33.410 20:17:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.410 20:17:31 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:33.671 Running I/O for 10 seconds... 
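The measurement itself is driven remotely: bdevperf is started idle with -z on its own RPC socket, the NVMe-oF controller is attached through that socket, and bdevperf.py triggers the 10-second run whose results follow. Condensed from the three invocations traced above (rpc_cmd is assumed to forward to scripts/rpc.py):

SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk

# 1. bdevperf waits for configuration over RPC (-z): QD 1024, 4 KiB I/O, verify workload, 10 s.
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

# 2. Attach the subsystem exported above; it shows up as bdev NVMe0n1.
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# 3. Kick off the timed I/O phase; the latency table below is its output.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests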
00:22:43.685 00:22:43.685 Latency(us) 00:22:43.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.685 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:22:43.685 Verification LBA range: start 0x0 length 0x4000 00:22:43.685 NVMe0n1 : 10.05 18155.16 70.92 0.00 0.00 56243.20 9037.07 44150.57 00:22:43.685 =================================================================================================================== 00:22:43.685 Total : 18155.16 70.92 0.00 0.00 56243.20 9037.07 44150.57 00:22:43.685 0 00:22:43.685 20:17:41 -- target/queue_depth.sh@39 -- # killprocess 1577282 00:22:43.685 20:17:41 -- common/autotest_common.sh@926 -- # '[' -z 1577282 ']' 00:22:43.685 20:17:41 -- common/autotest_common.sh@930 -- # kill -0 1577282 00:22:43.685 20:17:41 -- common/autotest_common.sh@931 -- # uname 00:22:43.685 20:17:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:43.685 20:17:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1577282 00:22:43.685 20:17:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:43.685 20:17:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:43.685 20:17:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1577282' 00:22:43.685 killing process with pid 1577282 00:22:43.685 20:17:41 -- common/autotest_common.sh@945 -- # kill 1577282 00:22:43.685 Received shutdown signal, test time was about 10.000000 seconds 00:22:43.685 00:22:43.685 Latency(us) 00:22:43.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.685 =================================================================================================================== 00:22:43.685 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.685 20:17:41 -- common/autotest_common.sh@950 -- # wait 1577282 00:22:43.944 20:17:41 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:22:43.944 20:17:41 -- target/queue_depth.sh@43 -- # nvmftestfini 00:22:43.944 20:17:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:43.944 20:17:41 -- nvmf/common.sh@116 -- # sync 00:22:43.944 20:17:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:43.944 20:17:41 -- nvmf/common.sh@119 -- # set +e 00:22:43.944 20:17:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:43.944 20:17:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:43.944 rmmod nvme_tcp 00:22:43.944 rmmod nvme_fabrics 00:22:44.204 rmmod nvme_keyring 00:22:44.204 20:17:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:44.204 20:17:41 -- nvmf/common.sh@123 -- # set -e 00:22:44.204 20:17:41 -- nvmf/common.sh@124 -- # return 0 00:22:44.204 20:17:41 -- nvmf/common.sh@477 -- # '[' -n 1576970 ']' 00:22:44.204 20:17:41 -- nvmf/common.sh@478 -- # killprocess 1576970 00:22:44.204 20:17:41 -- common/autotest_common.sh@926 -- # '[' -z 1576970 ']' 00:22:44.204 20:17:41 -- common/autotest_common.sh@930 -- # kill -0 1576970 00:22:44.204 20:17:41 -- common/autotest_common.sh@931 -- # uname 00:22:44.204 20:17:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:44.204 20:17:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1576970 00:22:44.205 20:17:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:44.205 20:17:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:44.205 20:17:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1576970' 00:22:44.205 killing process with pid 1576970 00:22:44.205 
20:17:41 -- common/autotest_common.sh@945 -- # kill 1576970 00:22:44.205 20:17:41 -- common/autotest_common.sh@950 -- # wait 1576970 00:22:44.776 20:17:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:44.776 20:17:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:44.776 20:17:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:44.776 20:17:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:44.776 20:17:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:44.776 20:17:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.776 20:17:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.776 20:17:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.681 20:17:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:46.681 00:22:46.681 real 0m20.599s 00:22:46.681 user 0m25.252s 00:22:46.681 sys 0m5.278s 00:22:46.681 20:17:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:46.681 20:17:44 -- common/autotest_common.sh@10 -- # set +x 00:22:46.681 ************************************ 00:22:46.681 END TEST nvmf_queue_depth 00:22:46.681 ************************************ 00:22:46.681 20:17:44 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:22:46.681 20:17:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:46.681 20:17:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:46.681 20:17:44 -- common/autotest_common.sh@10 -- # set +x 00:22:46.681 ************************************ 00:22:46.681 START TEST nvmf_multipath 00:22:46.681 ************************************ 00:22:46.681 20:17:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:22:46.941 * Looking for test storage... 
00:22:46.941 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:22:46.941 20:17:44 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.941 20:17:44 -- nvmf/common.sh@7 -- # uname -s 00:22:46.941 20:17:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.941 20:17:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.941 20:17:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.941 20:17:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.941 20:17:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.941 20:17:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.941 20:17:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.941 20:17:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.941 20:17:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.941 20:17:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.941 20:17:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:46.941 20:17:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:46.941 20:17:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.941 20:17:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.941 20:17:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:46.941 20:17:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:46.941 20:17:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.941 20:17:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.941 20:17:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.941 20:17:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.941 20:17:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.941 20:17:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.941 20:17:44 -- paths/export.sh@5 -- # export PATH 00:22:46.941 20:17:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.941 20:17:44 -- nvmf/common.sh@46 -- # : 0 00:22:46.941 20:17:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:46.941 20:17:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:46.941 20:17:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:46.941 20:17:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.941 20:17:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.941 20:17:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:46.941 20:17:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:46.941 20:17:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:46.941 20:17:44 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:46.941 20:17:44 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:46.941 20:17:44 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:46.941 20:17:44 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:46.941 20:17:44 -- target/multipath.sh@43 -- # nvmftestinit 00:22:46.941 20:17:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:46.941 20:17:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.941 20:17:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:46.941 20:17:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:46.941 20:17:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:46.941 20:17:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.941 20:17:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.941 20:17:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.941 20:17:44 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:22:46.941 20:17:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:46.941 20:17:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:46.941 20:17:44 -- common/autotest_common.sh@10 -- # set +x 00:22:53.513 20:17:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:53.513 20:17:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:53.513 20:17:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:53.513 20:17:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:53.513 20:17:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:53.513 20:17:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:53.513 20:17:50 
-- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:53.513 20:17:50 -- nvmf/common.sh@294 -- # net_devs=() 00:22:53.513 20:17:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:53.513 20:17:50 -- nvmf/common.sh@295 -- # e810=() 00:22:53.513 20:17:50 -- nvmf/common.sh@295 -- # local -ga e810 00:22:53.513 20:17:50 -- nvmf/common.sh@296 -- # x722=() 00:22:53.513 20:17:50 -- nvmf/common.sh@296 -- # local -ga x722 00:22:53.513 20:17:50 -- nvmf/common.sh@297 -- # mlx=() 00:22:53.513 20:17:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:53.513 20:17:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.513 20:17:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.513 20:17:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.513 20:17:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.513 20:17:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.513 20:17:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.513 20:17:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.513 20:17:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.513 20:17:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.513 20:17:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.513 20:17:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.513 20:17:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:53.513 20:17:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:53.513 20:17:50 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:22:53.513 20:17:50 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:53.514 20:17:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:53.514 20:17:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:53.514 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:53.514 20:17:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:53.514 20:17:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:53.514 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:53.514 20:17:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:53.514 20:17:50 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:53.514 20:17:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.514 20:17:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:53.514 20:17:50 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.514 20:17:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:53.514 Found net devices under 0000:27:00.0: cvl_0_0 00:22:53.514 20:17:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.514 20:17:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:53.514 20:17:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.514 20:17:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:53.514 20:17:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.514 20:17:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:53.514 Found net devices under 0000:27:00.1: cvl_0_1 00:22:53.514 20:17:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.514 20:17:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:53.514 20:17:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:53.514 20:17:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:53.514 20:17:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.514 20:17:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:53.514 20:17:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:53.514 20:17:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:53.514 20:17:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:53.514 20:17:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:53.514 20:17:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:53.514 20:17:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:53.514 20:17:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.514 20:17:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:53.514 20:17:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:53.514 20:17:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:53.514 20:17:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:53.514 20:17:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:53.514 20:17:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.514 20:17:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:53.514 20:17:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.514 20:17:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:53.514 20:17:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:53.514 20:17:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:53.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:53.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:22:53.514 00:22:53.514 --- 10.0.0.2 ping statistics --- 00:22:53.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.514 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:22:53.514 20:17:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:53.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:53.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:22:53.514 00:22:53.514 --- 10.0.0.1 ping statistics --- 00:22:53.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.514 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:22:53.514 20:17:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.514 20:17:50 -- nvmf/common.sh@410 -- # return 0 00:22:53.514 20:17:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:53.514 20:17:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.514 20:17:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.514 20:17:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:53.514 20:17:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:53.514 20:17:50 -- target/multipath.sh@45 -- # '[' -z ']' 00:22:53.514 20:17:50 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:22:53.514 only one NIC for nvmf test 00:22:53.514 20:17:50 -- target/multipath.sh@47 -- # nvmftestfini 00:22:53.514 20:17:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:53.514 20:17:50 -- nvmf/common.sh@116 -- # sync 00:22:53.514 20:17:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:53.514 20:17:50 -- nvmf/common.sh@119 -- # set +e 00:22:53.514 20:17:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:53.514 20:17:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:53.514 rmmod nvme_tcp 00:22:53.514 rmmod nvme_fabrics 00:22:53.514 rmmod nvme_keyring 00:22:53.514 20:17:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:53.514 20:17:50 -- nvmf/common.sh@123 -- # set -e 00:22:53.514 20:17:50 -- nvmf/common.sh@124 -- # return 0 00:22:53.514 20:17:50 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:22:53.514 20:17:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:53.514 20:17:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:53.514 20:17:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.514 20:17:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:53.514 20:17:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.514 20:17:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.514 20:17:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.899 20:17:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:54.899 20:17:52 -- target/multipath.sh@48 -- # exit 0 00:22:54.899 20:17:52 -- target/multipath.sh@1 -- # nvmftestfini 00:22:54.899 20:17:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:54.899 20:17:52 -- nvmf/common.sh@116 -- # sync 00:22:54.899 20:17:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:54.899 20:17:52 -- nvmf/common.sh@119 -- # set +e 00:22:54.899 20:17:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:54.899 20:17:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:54.899 20:17:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:54.899 20:17:52 -- nvmf/common.sh@123 -- # set -e 00:22:54.899 20:17:52 -- nvmf/common.sh@124 -- # return 0 00:22:54.899 20:17:52 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:22:54.899 20:17:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:54.899 20:17:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:54.899 20:17:52 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:22:54.899 20:17:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:54.899 20:17:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:54.899 20:17:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.899 20:17:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.899 20:17:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.899 20:17:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:54.899 00:22:54.899 real 0m8.145s 00:22:54.899 user 0m1.587s 00:22:54.899 sys 0m4.447s 00:22:54.899 20:17:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:54.899 20:17:52 -- common/autotest_common.sh@10 -- # set +x 00:22:54.899 ************************************ 00:22:54.899 END TEST nvmf_multipath 00:22:54.899 ************************************ 00:22:54.899 20:17:52 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:22:54.899 20:17:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:54.899 20:17:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:54.899 20:17:52 -- common/autotest_common.sh@10 -- # set +x 00:22:54.899 ************************************ 00:22:54.899 START TEST nvmf_zcopy 00:22:54.899 ************************************ 00:22:54.899 20:17:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:22:54.899 * Looking for test storage... 00:22:54.899 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:22:54.899 20:17:52 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.899 20:17:52 -- nvmf/common.sh@7 -- # uname -s 00:22:54.899 20:17:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.899 20:17:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.899 20:17:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.899 20:17:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.899 20:17:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.899 20:17:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.899 20:17:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.899 20:17:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.899 20:17:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.899 20:17:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.157 20:17:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:55.157 20:17:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:55.157 20:17:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.157 20:17:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.157 20:17:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:55.157 20:17:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:55.157 20:17:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.157 20:17:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.157 20:17:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.157 20:17:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.157 20:17:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.157 20:17:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.157 20:17:52 -- paths/export.sh@5 -- # export PATH 00:22:55.157 20:17:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.158 20:17:52 -- nvmf/common.sh@46 -- # : 0 00:22:55.158 20:17:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:55.158 20:17:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:55.158 20:17:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:55.158 20:17:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.158 20:17:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.158 20:17:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:55.158 20:17:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:55.158 20:17:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:55.158 20:17:52 -- target/zcopy.sh@12 -- # nvmftestinit 00:22:55.158 20:17:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:55.158 20:17:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.158 20:17:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:55.158 20:17:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:55.158 20:17:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:55.158 20:17:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.158 20:17:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.158 20:17:52 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.158 20:17:52 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:22:55.158 20:17:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:55.158 20:17:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:55.158 20:17:52 -- common/autotest_common.sh@10 -- # set +x 00:23:00.436 20:17:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:00.437 20:17:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:00.437 20:17:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:00.437 20:17:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:00.437 20:17:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:00.437 20:17:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:00.437 20:17:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:00.437 20:17:57 -- nvmf/common.sh@294 -- # net_devs=() 00:23:00.437 20:17:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:00.437 20:17:57 -- nvmf/common.sh@295 -- # e810=() 00:23:00.437 20:17:57 -- nvmf/common.sh@295 -- # local -ga e810 00:23:00.437 20:17:57 -- nvmf/common.sh@296 -- # x722=() 00:23:00.437 20:17:57 -- nvmf/common.sh@296 -- # local -ga x722 00:23:00.437 20:17:57 -- nvmf/common.sh@297 -- # mlx=() 00:23:00.437 20:17:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:00.437 20:17:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.437 20:17:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.437 20:17:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.437 20:17:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.437 20:17:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.437 20:17:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.437 20:17:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.437 20:17:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.437 20:17:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.437 20:17:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.437 20:17:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.437 20:17:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:00.437 20:17:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:00.437 20:17:57 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:23:00.437 20:17:57 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:23:00.437 20:17:57 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:23:00.437 20:17:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:00.437 20:17:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:00.437 20:17:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:00.437 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:00.437 20:17:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:00.437 20:17:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:00.437 20:17:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.437 20:17:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.437 20:17:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:00.437 20:17:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:00.437 20:17:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:00.437 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:00.437 20:17:57 
-- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:00.437 20:17:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:00.437 20:17:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.437 20:17:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.437 20:17:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:00.437 20:17:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:00.437 20:17:57 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:23:00.437 20:17:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:00.437 20:17:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.437 20:17:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:00.437 20:17:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.437 20:17:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:00.437 Found net devices under 0000:27:00.0: cvl_0_0 00:23:00.437 20:17:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.437 20:17:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:00.437 20:17:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.437 20:17:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:00.437 20:17:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.437 20:17:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:00.437 Found net devices under 0000:27:00.1: cvl_0_1 00:23:00.437 20:17:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.437 20:17:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:00.437 20:17:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:00.437 20:17:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:00.437 20:17:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:00.437 20:17:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:00.437 20:17:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.437 20:17:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.437 20:17:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.437 20:17:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:00.437 20:17:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.437 20:17:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.437 20:17:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:00.437 20:17:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.437 20:17:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.437 20:17:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:00.437 20:17:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:00.437 20:17:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.437 20:17:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.437 20:17:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.437 20:17:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.437 20:17:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:00.437 20:17:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.437 20:17:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.437 20:17:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
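Stripped of the per-step xtrace, the nvmf_tcp_init sequence that just ran splits the two ice ports found above into a target side and an initiator side. A condensed sketch with the same interface names and addresses as the trace (cvl_0_0 and cvl_0_1; the ping check that follows in the log verifies both directions):

# Target side: isolate one port in its own network namespace as 10.0.0.2.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Initiator side: the second port stays in the default namespace as 10.0.0.1.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up

# Accept incoming NVMe/TCP traffic (port 4420) on the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT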
00:23:00.437 20:17:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:00.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:23:00.437 00:23:00.437 --- 10.0.0.2 ping statistics --- 00:23:00.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.437 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:23:00.437 20:17:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:23:00.437 00:23:00.437 --- 10.0.0.1 ping statistics --- 00:23:00.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.437 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:23:00.437 20:17:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.437 20:17:58 -- nvmf/common.sh@410 -- # return 0 00:23:00.437 20:17:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:00.437 20:17:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.437 20:17:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:00.437 20:17:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:00.437 20:17:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.437 20:17:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:00.437 20:17:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:00.437 20:17:58 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:23:00.437 20:17:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:00.437 20:17:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:00.437 20:17:58 -- common/autotest_common.sh@10 -- # set +x 00:23:00.437 20:17:58 -- nvmf/common.sh@469 -- # nvmfpid=1587439 00:23:00.437 20:17:58 -- nvmf/common.sh@470 -- # waitforlisten 1587439 00:23:00.437 20:17:58 -- common/autotest_common.sh@819 -- # '[' -z 1587439 ']' 00:23:00.437 20:17:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.437 20:17:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:00.437 20:17:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.437 20:17:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:00.437 20:17:58 -- common/autotest_common.sh@10 -- # set +x 00:23:00.437 20:17:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:00.437 [2024-04-25 20:17:58.136213] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:00.437 [2024-04-25 20:17:58.136329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.437 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.438 [2024-04-25 20:17:58.263384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.438 [2024-04-25 20:17:58.359091] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:00.438 [2024-04-25 20:17:58.359262] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:00.438 [2024-04-25 20:17:58.359276] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.438 [2024-04-25 20:17:58.359284] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.438 [2024-04-25 20:17:58.359312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.005 20:17:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:01.005 20:17:58 -- common/autotest_common.sh@852 -- # return 0 00:23:01.005 20:17:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:01.005 20:17:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:01.005 20:17:58 -- common/autotest_common.sh@10 -- # set +x 00:23:01.005 20:17:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.006 20:17:58 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:23:01.006 20:17:58 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:23:01.006 20:17:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.006 20:17:58 -- common/autotest_common.sh@10 -- # set +x 00:23:01.006 [2024-04-25 20:17:58.849484] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.006 20:17:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.006 20:17:58 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:01.006 20:17:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.006 20:17:58 -- common/autotest_common.sh@10 -- # set +x 00:23:01.006 20:17:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.006 20:17:58 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:01.006 20:17:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.006 20:17:58 -- common/autotest_common.sh@10 -- # set +x 00:23:01.006 [2024-04-25 20:17:58.865645] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.006 20:17:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.006 20:17:58 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:01.006 20:17:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.006 20:17:58 -- common/autotest_common.sh@10 -- # set +x 00:23:01.006 20:17:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.006 20:17:58 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:23:01.006 20:17:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.006 20:17:58 -- common/autotest_common.sh@10 -- # set +x 00:23:01.006 malloc0 00:23:01.006 20:17:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.006 20:17:58 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:01.006 20:17:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.006 20:17:58 -- common/autotest_common.sh@10 -- # set +x 00:23:01.006 20:17:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.006 20:17:58 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:23:01.006 20:17:58 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:23:01.006 20:17:58 -- nvmf/common.sh@520 -- # config=() 00:23:01.006 20:17:58 -- 
nvmf/common.sh@520 -- # local subsystem config 00:23:01.006 20:17:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:01.006 20:17:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:01.006 { 00:23:01.006 "params": { 00:23:01.006 "name": "Nvme$subsystem", 00:23:01.006 "trtype": "$TEST_TRANSPORT", 00:23:01.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.006 "adrfam": "ipv4", 00:23:01.006 "trsvcid": "$NVMF_PORT", 00:23:01.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.006 "hdgst": ${hdgst:-false}, 00:23:01.006 "ddgst": ${ddgst:-false} 00:23:01.006 }, 00:23:01.006 "method": "bdev_nvme_attach_controller" 00:23:01.006 } 00:23:01.006 EOF 00:23:01.006 )") 00:23:01.006 20:17:58 -- nvmf/common.sh@542 -- # cat 00:23:01.006 20:17:58 -- nvmf/common.sh@544 -- # jq . 00:23:01.006 20:17:58 -- nvmf/common.sh@545 -- # IFS=, 00:23:01.006 20:17:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:01.006 "params": { 00:23:01.006 "name": "Nvme1", 00:23:01.006 "trtype": "tcp", 00:23:01.006 "traddr": "10.0.0.2", 00:23:01.006 "adrfam": "ipv4", 00:23:01.006 "trsvcid": "4420", 00:23:01.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.006 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.006 "hdgst": false, 00:23:01.006 "ddgst": false 00:23:01.006 }, 00:23:01.006 "method": "bdev_nvme_attach_controller" 00:23:01.006 }' 00:23:01.265 [2024-04-25 20:17:58.992561] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:01.265 [2024-04-25 20:17:58.992668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587509 ] 00:23:01.265 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.265 [2024-04-25 20:17:59.103792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.525 [2024-04-25 20:17:59.198330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.525 Running I/O for 10 seconds... 
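Filtered out of the trace above, the target-side preparation for this zcopy run is a short sequence: launch nvmf_tgt inside the target namespace, then configure it over RPC before bdevperf connects using the generated JSON shown above. A sketch with the same arguments as the trace (rpc.py path as listed earlier in the log; waitforlisten and the JSON-generation helpers are omitted):

rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py

# Target application, pinned to core 1 (-m 0x2), inside the target namespace.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

# TCP transport with zero-copy enabled (flags copied verbatim from the trace).
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem: serial SPDK00000000000001, any host allowed, up to 10 namespaces.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1.
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The 10-second, 128-deep, 8 KiB verify run whose results follow is then driven much like the queue_depth run earlier, except that bdevperf here receives the generated target description on a file descriptor (--json /dev/fd/62) instead of attaching the controller over its RPC socket.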
00:23:11.515 00:23:11.515 Latency(us) 00:23:11.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.516 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:23:11.516 Verification LBA range: start 0x0 length 0x1000 00:23:11.516 Nvme1n1 : 10.01 13116.79 102.47 0.00 0.00 9736.85 1526.30 17936.17 00:23:11.516 =================================================================================================================== 00:23:11.516 Total : 13116.79 102.47 0.00 0.00 9736.85 1526.30 17936.17 00:23:12.138 20:18:09 -- target/zcopy.sh@39 -- # perfpid=1590231 00:23:12.138 20:18:09 -- target/zcopy.sh@41 -- # xtrace_disable 00:23:12.138 20:18:09 -- common/autotest_common.sh@10 -- # set +x 00:23:12.138 20:18:09 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:23:12.138 20:18:09 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:23:12.138 20:18:09 -- nvmf/common.sh@520 -- # config=() 00:23:12.138 20:18:09 -- nvmf/common.sh@520 -- # local subsystem config 00:23:12.138 20:18:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:12.138 20:18:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:12.138 { 00:23:12.138 "params": { 00:23:12.138 "name": "Nvme$subsystem", 00:23:12.138 "trtype": "$TEST_TRANSPORT", 00:23:12.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.138 "adrfam": "ipv4", 00:23:12.138 "trsvcid": "$NVMF_PORT", 00:23:12.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.138 "hdgst": ${hdgst:-false}, 00:23:12.138 "ddgst": ${ddgst:-false} 00:23:12.138 }, 00:23:12.138 "method": "bdev_nvme_attach_controller" 00:23:12.138 } 00:23:12.138 EOF 00:23:12.138 )") 00:23:12.138 20:18:09 -- nvmf/common.sh@542 -- # cat 00:23:12.138 20:18:09 -- nvmf/common.sh@544 -- # jq . 
00:23:12.138 [2024-04-25 20:18:09.822610] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.138 [2024-04-25 20:18:09.822655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.138 20:18:09 -- nvmf/common.sh@545 -- # IFS=, 00:23:12.138 20:18:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:12.138 "params": { 00:23:12.138 "name": "Nvme1", 00:23:12.138 "trtype": "tcp", 00:23:12.138 "traddr": "10.0.0.2", 00:23:12.138 "adrfam": "ipv4", 00:23:12.138 "trsvcid": "4420", 00:23:12.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.138 "hdgst": false, 00:23:12.138 "ddgst": false 00:23:12.138 }, 00:23:12.138 "method": "bdev_nvme_attach_controller" 00:23:12.138 }' 00:23:12.138 [2024-04-25 20:18:09.830563] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.138 [2024-04-25 20:18:09.830583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.138 [2024-04-25 20:18:09.838538] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.138 [2024-04-25 20:18:09.838556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.138 [2024-04-25 20:18:09.846547] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.138 [2024-04-25 20:18:09.846565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.138 [2024-04-25 20:18:09.854546] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.138 [2024-04-25 20:18:09.854563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.138 [2024-04-25 20:18:09.862535] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:09.862550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:09.870551] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:09.870568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:09.878553] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:09.878568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:09.880943] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:23:12.139 [2024-04-25 20:18:09.881052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590231 ] 00:23:12.139 [2024-04-25 20:18:09.886541] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:09.886558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:09.894553] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:09.894569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:09.902544] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:09.902561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:09.910556] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:09.910571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:09.918556] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:09.918571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:09.926550] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:09.926565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:09.934561] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:09.934576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:09.942560] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:09.942574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:09.950553] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:09.950570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.139 [2024-04-25 20:18:09.958566] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:09.958580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:09.966572] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:09.966587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:09.974572] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:09.974586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:09.982576] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:09.982591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:09.990580] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:09.990594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:09.991717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.139 [2024-04-25 20:18:09.998582] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:09.998597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:10.006613] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:10.006635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:10.014590] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:10.014606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:10.022592] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:10.022607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:10.030599] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:10.030615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:10.038603] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:10.038619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.139 [2024-04-25 20:18:10.046599] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.139 [2024-04-25 20:18:10.046614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.398 [2024-04-25 20:18:10.054590] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.398 [2024-04-25 20:18:10.054605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.398 [2024-04-25 20:18:10.062610] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.398 [2024-04-25 20:18:10.062624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.398 [2024-04-25 20:18:10.070603] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.398 [2024-04-25 20:18:10.070619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.398 [2024-04-25 20:18:10.078599] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.398 [2024-04-25 20:18:10.078613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.398 [2024-04-25 20:18:10.086625] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.398 [2024-04-25 20:18:10.086640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.398 [2024-04-25 20:18:10.094317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.398 [2024-04-25 20:18:10.094604] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.398 [2024-04-25 20:18:10.094618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:23:12.398 [2024-04-25 20:18:10.102612] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.398 [2024-04-25 20:18:10.102626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.398 [2024-04-25 20:18:10.110615] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.398 [2024-04-25 20:18:10.110630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.398 [2024-04-25 20:18:10.118609] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.398 [2024-04-25 20:18:10.118623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.398 [2024-04-25 20:18:10.126620] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.398 [2024-04-25 20:18:10.126634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.398 [2024-04-25 20:18:10.134621] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.134635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.142616] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.142630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.150630] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.150645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.158633] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.158648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.166634] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.166648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.174636] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.174650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.182635] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.182648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.190668] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.190682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.198639] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.198653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.206634] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.206647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.214648] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:23:12.399 [2024-04-25 20:18:10.214662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.222638] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.222652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.230651] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.230665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.238659] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.238677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.246665] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.246687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.254682] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.254701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.262673] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.262691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.270666] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.270684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.278674] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.278689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.286667] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.286682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.294681] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.294696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.302684] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.302699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.310678] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.310697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.318699] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.318719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.399 [2024-04-25 20:18:10.326701] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.399 [2024-04-25 20:18:10.326721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.658 [2024-04-25 20:18:10.334684] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.658 [2024-04-25 20:18:10.334701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.658 [2024-04-25 20:18:10.342735] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.658 [2024-04-25 20:18:10.342765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.658 [2024-04-25 20:18:10.350720] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.658 [2024-04-25 20:18:10.350737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.658 Running I/O for 5 seconds... 00:23:12.658 [2024-04-25 20:18:10.358896] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.658 [2024-04-25 20:18:10.358919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.658 [2024-04-25 20:18:10.368605] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.368633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.377644] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.377670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.386946] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.386973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.395697] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.395723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.404979] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.405006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.414187] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.414212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.423369] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.423395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.432425] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.432452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.441780] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.441806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.450929] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.450955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.460247] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 
[2024-04-25 20:18:10.460272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.469519] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.469546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.478880] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.478905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.488393] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.488419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.497344] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.497370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.506140] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.506166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.515594] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.515620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.524589] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.524614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.533771] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.533797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.542426] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.542451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.551057] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.551081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.560218] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.560242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.569033] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.569064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.578256] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.578280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.659 [2024-04-25 20:18:10.587751] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.659 [2024-04-25 20:18:10.587776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.596680] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.596705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.605511] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.605537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.614861] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.614888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.624163] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.624189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.633484] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.633515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.642841] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.642865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.651731] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.651759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.660790] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.660815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.669965] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.669993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.679381] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.679408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.688355] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.688381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.698007] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.698036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.706449] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.706473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.715684] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.715711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.724553] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.724578] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.733915] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.733949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.743308] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.743337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.751774] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.751803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.760951] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.760977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.769733] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.769760] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.778397] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.778422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.787032] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.787059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.795613] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.795639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.804868] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.804897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.813746] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.813771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.822945] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.822972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.832257] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.832282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.918 [2024-04-25 20:18:10.841907] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.918 [2024-04-25 20:18:10.841934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:10.850877] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.178 [2024-04-25 20:18:10.850904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:10.860011] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.178 [2024-04-25 20:18:10.860036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:10.868958] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.178 [2024-04-25 20:18:10.868986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:10.877973] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.178 [2024-04-25 20:18:10.877999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:10.886248] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.178 [2024-04-25 20:18:10.886274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:10.895438] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.178 [2024-04-25 20:18:10.895465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:10.904480] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.178 [2024-04-25 20:18:10.904513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:10.913852] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.178 [2024-04-25 20:18:10.913883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:10.922737] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.178 [2024-04-25 20:18:10.922766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:10.932175] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.178 [2024-04-25 20:18:10.932201] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:10.941407] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.178 [2024-04-25 20:18:10.941432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:10.950571] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.178 [2024-04-25 20:18:10.950597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:10.959982] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.178 [2024-04-25 20:18:10.960008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:10.968878] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.178 [2024-04-25 20:18:10.968904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:10.977629] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.178 [2024-04-25 20:18:10.977656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:10.986913] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.178 [2024-04-25 20:18:10.986938] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:10.995826] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.178 [2024-04-25 20:18:10.995850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:11.004717] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.178 [2024-04-25 20:18:11.004745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.178 [2024-04-25 20:18:11.013884] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.179 [2024-04-25 20:18:11.013910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.179 [2024-04-25 20:18:11.022917] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.179 [2024-04-25 20:18:11.022944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.179 [2024-04-25 20:18:11.032186] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.179 [2024-04-25 20:18:11.032212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.179 [2024-04-25 20:18:11.041224] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.179 [2024-04-25 20:18:11.041251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.179 [2024-04-25 20:18:11.050108] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.179 [2024-04-25 20:18:11.050134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.179 [2024-04-25 20:18:11.059787] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.179 [2024-04-25 20:18:11.059812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.179 [2024-04-25 20:18:11.068191] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.179 [2024-04-25 20:18:11.068216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.179 [2024-04-25 20:18:11.077398] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.179 [2024-04-25 20:18:11.077424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.179 [2024-04-25 20:18:11.086516] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.179 [2024-04-25 20:18:11.086547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.179 [2024-04-25 20:18:11.095505] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.179 [2024-04-25 20:18:11.095531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.179 [2024-04-25 20:18:11.104874] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.179 [2024-04-25 20:18:11.104901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.113233] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.113261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.122645] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.122673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.131031] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.131056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.140366] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.140391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.148679] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.148702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.157760] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.157789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.167600] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.167630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.175078] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.175105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.185567] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.185595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.193706] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.193732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.202925] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.202951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.212315] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.212343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.221237] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.221265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.230534] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.230562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.239450] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.239477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.248448] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.248474] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.257523] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.257552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.266673] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.266700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.276070] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.276097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.285348] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.285377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.294277] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.294301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.303963] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.303993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.313072] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.313104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.322333] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.322362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.331255] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.331283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.340546] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.340575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.349857] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.349883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.358108] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.358135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.439 [2024-04-25 20:18:11.367350] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.439 [2024-04-25 20:18:11.367376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.376121] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.376150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.385132] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.385158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.394556] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.394583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.403455] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.403480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.412123] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.412151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.421373] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.421400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.430487] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.430523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.439575] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.439601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.448988] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.449015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.457726] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.457752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.466988] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.467016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.476217] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.476244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.485270] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.485297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.494171] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.494197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.503477] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.503512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.512170] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.512194] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.521409] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.521435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.530855] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.530884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.539827] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.539853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.549028] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.549057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.558488] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.558521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.567629] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.567657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.576501] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.576528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.586115] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.586141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.595090] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.595114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.604409] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.604435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.613588] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.613612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.699 [2024-04-25 20:18:11.622848] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.699 [2024-04-25 20:18:11.622875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.956 [2024-04-25 20:18:11.631715] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.956 [2024-04-25 20:18:11.631744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.641001] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.641027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.649798] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.649823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.658639] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.658664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.667390] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.667418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.677111] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.677136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.686281] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.686304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.695154] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.695178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.704922] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.704949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.713780] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.713805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.722521] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.722548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.731691] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.731718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.741097] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.741124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.750057] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.750082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.758787] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.758814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.768177] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.768201] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.776895] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.776920] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.786092] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.786117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.795015] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.795040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.804205] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.804229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.813271] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.813297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.822613] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.822637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.831590] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.831615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.841027] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.841052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.850478] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.850509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.859870] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.859894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.869178] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.869203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.877982] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.878006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.957 [2024-04-25 20:18:11.887065] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.957 [2024-04-25 20:18:11.887092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:11.895852] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:11.895883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:11.905045] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:11.905072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:11.914348] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:11.914373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:11.923590] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:11.923617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:11.932403] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:11.932428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:11.941250] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:11.941279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:11.950548] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:11.950572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:11.959425] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:11.959451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:11.968609] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:11.968635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:11.977711] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:11.977735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:11.986933] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:11.986957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:11.995739] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:11.995765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:12.004927] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:12.004951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:12.014177] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:12.014204] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:12.023149] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:12.023173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:12.032400] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:12.032424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:12.041656] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:12.041680] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:12.050405] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:12.050429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:12.059728] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:12.059752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:12.068618] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:12.068643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:12.078038] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:12.078063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:12.087207] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:12.087234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:12.096108] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:12.096133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:12.105448] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:12.105474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:12.114789] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:12.114817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:12.123833] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:12.123858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:12.132986] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:12.133009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.215 [2024-04-25 20:18:12.142256] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.215 [2024-04-25 20:18:12.142283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.485 [2024-04-25 20:18:12.151693] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.485 [2024-04-25 20:18:12.151717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.485 [2024-04-25 20:18:12.160638] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.485 [2024-04-25 20:18:12.160665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.485 [2024-04-25 20:18:12.170076] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.485 [2024-04-25 20:18:12.170102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.485 [2024-04-25 20:18:12.179021] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.485 [2024-04-25 20:18:12.179045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[... the same two errors (subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1513:nvmf_rpc_ns_paused: "Unable to add namespace") repeat in lock-step for each subsequent add-namespace attempt from 2024-04-25 20:18:12.188 through 20:18:14.954 (elapsed 00:23:14.485 - 00:23:17.093); hundreds of near-identical entries omitted ...]
00:23:17.093 [2024-04-25 20:18:14.954148] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.093 [2024-04-25 20:18:14.954172] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.093 [2024-04-25 20:18:14.963359] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.093 [2024-04-25 20:18:14.963384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.093 [2024-04-25 20:18:14.972207] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.093 [2024-04-25 20:18:14.972233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.093 [2024-04-25 20:18:14.981075] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.093 [2024-04-25 20:18:14.981100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.093 [2024-04-25 20:18:14.990367] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.093 [2024-04-25 20:18:14.990394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.093 [2024-04-25 20:18:14.999698] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.093 [2024-04-25 20:18:14.999723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.093 [2024-04-25 20:18:15.008571] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.093 [2024-04-25 20:18:15.008599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.093 [2024-04-25 20:18:15.017640] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.093 [2024-04-25 20:18:15.017665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.026465] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.026496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.035239] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.035265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.044934] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.044963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.054414] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.054439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.063435] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.063461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.072730] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.072758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.081627] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.081652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.090900] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.090924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.100252] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.100277] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.109863] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.109891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.118835] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.118860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.127972] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.128000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.137183] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.137208] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.146639] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.146668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.155521] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.155546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.164726] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.164753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.174181] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.174206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.182993] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.183016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.192222] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.192249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.201547] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.201584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.210420] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.210446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.219567] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.219608] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.228433] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.228458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.237621] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.362 [2024-04-25 20:18:15.237645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.362 [2024-04-25 20:18:15.246938] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.363 [2024-04-25 20:18:15.246965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.363 [2024-04-25 20:18:15.256295] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.363 [2024-04-25 20:18:15.256321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.363 [2024-04-25 20:18:15.265710] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.363 [2024-04-25 20:18:15.265735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.363 [2024-04-25 20:18:15.274565] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.363 [2024-04-25 20:18:15.274591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.363 [2024-04-25 20:18:15.283834] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.363 [2024-04-25 20:18:15.283860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.363 [2024-04-25 20:18:15.292683] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.363 [2024-04-25 20:18:15.292708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.622 [2024-04-25 20:18:15.301961] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.622 [2024-04-25 20:18:15.301991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.622 [2024-04-25 20:18:15.311170] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.622 [2024-04-25 20:18:15.311194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.622 [2024-04-25 20:18:15.320546] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.622 [2024-04-25 20:18:15.320573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.622 [2024-04-25 20:18:15.329462] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.622 [2024-04-25 20:18:15.329487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.622 [2024-04-25 20:18:15.338597] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.622 [2024-04-25 20:18:15.338624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.622 [2024-04-25 20:18:15.347436] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.622 [2024-04-25 20:18:15.347461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.622 [2024-04-25 20:18:15.356413] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.622 [2024-04-25 20:18:15.356441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.622
00:23:17.622 Latency(us)
00:23:17.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:17.622 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:23:17.622 Nvme1n1 : 5.01 18019.01 140.77 0.00 0.00 7097.16 2949.12 14486.91
00:23:17.622 ===================================================================================================================
00:23:17.622 Total : 18019.01 140.77 0.00 0.00 7097.16 2949.12 14486.91
00:23:17.622 [2024-04-25 20:18:15.362264] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.622 [2024-04-25 20:18:15.362292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair continues at roughly 8 ms intervals from 20:18:15.370 through 20:18:15.674; the repeated entries are elided here ...]
00:23:17.880 [2024-04-25 20:18:15.682328] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.880 [2024-04-25 20:18:15.682342]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.880 [2024-04-25 20:18:15.690363] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.881 [2024-04-25 20:18:15.690376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.881 [2024-04-25 20:18:15.698335] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.881 [2024-04-25 20:18:15.698350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.881 [2024-04-25 20:18:15.706347] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.881 [2024-04-25 20:18:15.706361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.881 [2024-04-25 20:18:15.714348] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.881 [2024-04-25 20:18:15.714362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.881 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1590231) - No such process 00:23:17.881 20:18:15 -- target/zcopy.sh@49 -- # wait 1590231 00:23:17.881 20:18:15 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:17.881 20:18:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:17.881 20:18:15 -- common/autotest_common.sh@10 -- # set +x 00:23:17.881 20:18:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:17.881 20:18:15 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:17.881 20:18:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:17.881 20:18:15 -- common/autotest_common.sh@10 -- # set +x 00:23:17.881 delay0 00:23:17.881 20:18:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:17.881 20:18:15 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:23:17.881 20:18:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:17.881 20:18:15 -- common/autotest_common.sh@10 -- # set +x 00:23:17.881 20:18:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:17.881 20:18:15 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:23:17.881 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.137 [2024-04-25 20:18:15.865749] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:23:24.710 [2024-04-25 20:18:21.964060] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:24.710 [2024-04-25 20:18:21.964111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:24.710 Initializing NVMe Controllers 00:23:24.710 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:24.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:24.710 Initialization complete. Launching workers. 
00:23:24.710 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 125 00:23:24.710 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 402, failed to submit 43 00:23:24.710 success 219, unsuccess 183, failed 0 00:23:24.710 20:18:21 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:23:24.710 20:18:21 -- target/zcopy.sh@60 -- # nvmftestfini 00:23:24.710 20:18:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:24.710 20:18:21 -- nvmf/common.sh@116 -- # sync 00:23:24.710 20:18:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:24.710 20:18:21 -- nvmf/common.sh@119 -- # set +e 00:23:24.710 20:18:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:24.710 20:18:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:24.710 rmmod nvme_tcp 00:23:24.710 rmmod nvme_fabrics 00:23:24.710 rmmod nvme_keyring 00:23:24.710 20:18:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:24.710 20:18:22 -- nvmf/common.sh@123 -- # set -e 00:23:24.710 20:18:22 -- nvmf/common.sh@124 -- # return 0 00:23:24.710 20:18:22 -- nvmf/common.sh@477 -- # '[' -n 1587439 ']' 00:23:24.710 20:18:22 -- nvmf/common.sh@478 -- # killprocess 1587439 00:23:24.710 20:18:22 -- common/autotest_common.sh@926 -- # '[' -z 1587439 ']' 00:23:24.710 20:18:22 -- common/autotest_common.sh@930 -- # kill -0 1587439 00:23:24.710 20:18:22 -- common/autotest_common.sh@931 -- # uname 00:23:24.710 20:18:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:24.710 20:18:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1587439 00:23:24.710 20:18:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:24.710 20:18:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:24.710 20:18:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1587439' 00:23:24.710 killing process with pid 1587439 00:23:24.710 20:18:22 -- common/autotest_common.sh@945 -- # kill 1587439 00:23:24.710 20:18:22 -- common/autotest_common.sh@950 -- # wait 1587439 00:23:24.710 20:18:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:24.710 20:18:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:24.710 20:18:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:24.710 20:18:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:24.710 20:18:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:24.710 20:18:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.710 20:18:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.710 20:18:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.243 20:18:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:27.243 00:23:27.243 real 0m31.871s 00:23:27.243 user 0m46.252s 00:23:27.243 sys 0m7.436s 00:23:27.243 20:18:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:27.243 20:18:24 -- common/autotest_common.sh@10 -- # set +x 00:23:27.243 ************************************ 00:23:27.243 END TEST nvmf_zcopy 00:23:27.243 ************************************ 00:23:27.243 20:18:24 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:23:27.243 20:18:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:27.243 20:18:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:27.243 20:18:24 -- common/autotest_common.sh@10 -- # set +x 00:23:27.243 ************************************ 
00:23:27.243 START TEST nvmf_nmic 00:23:27.243 ************************************ 00:23:27.243 20:18:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:23:27.243 * Looking for test storage... 00:23:27.243 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:23:27.243 20:18:24 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.243 20:18:24 -- nvmf/common.sh@7 -- # uname -s 00:23:27.243 20:18:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.243 20:18:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.243 20:18:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.243 20:18:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.243 20:18:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.243 20:18:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.243 20:18:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.243 20:18:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.243 20:18:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.243 20:18:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.243 20:18:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:27.243 20:18:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:27.243 20:18:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.243 20:18:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.243 20:18:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:27.243 20:18:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:27.243 20:18:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.243 20:18:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.243 20:18:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.243 20:18:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.244 20:18:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.244 20:18:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.244 20:18:24 -- paths/export.sh@5 -- # export PATH 00:23:27.244 20:18:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.244 20:18:24 -- nvmf/common.sh@46 -- # : 0 00:23:27.244 20:18:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:27.244 20:18:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:27.244 20:18:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:27.244 20:18:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.244 20:18:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.244 20:18:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:27.244 20:18:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:27.244 20:18:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:27.244 20:18:24 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:27.244 20:18:24 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:27.244 20:18:24 -- target/nmic.sh@14 -- # nvmftestinit 00:23:27.244 20:18:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:27.244 20:18:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.244 20:18:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:27.244 20:18:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:27.244 20:18:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:27.244 20:18:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.244 20:18:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.244 20:18:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.244 20:18:24 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:23:27.244 20:18:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:27.244 20:18:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:27.244 20:18:24 -- common/autotest_common.sh@10 -- # set +x 00:23:32.533 20:18:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:32.533 20:18:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:32.533 20:18:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:32.533 20:18:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:32.533 20:18:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:32.533 20:18:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:32.533 20:18:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:32.533 20:18:29 -- nvmf/common.sh@294 -- # net_devs=() 00:23:32.533 20:18:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:32.533 20:18:29 -- 
nvmf/common.sh@295 -- # e810=() 00:23:32.533 20:18:29 -- nvmf/common.sh@295 -- # local -ga e810 00:23:32.533 20:18:29 -- nvmf/common.sh@296 -- # x722=() 00:23:32.533 20:18:29 -- nvmf/common.sh@296 -- # local -ga x722 00:23:32.533 20:18:29 -- nvmf/common.sh@297 -- # mlx=() 00:23:32.533 20:18:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:32.533 20:18:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.533 20:18:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.533 20:18:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.533 20:18:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.533 20:18:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.533 20:18:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.533 20:18:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.533 20:18:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.533 20:18:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.533 20:18:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.533 20:18:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.533 20:18:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:32.533 20:18:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:32.533 20:18:29 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:23:32.533 20:18:29 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:23:32.533 20:18:29 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:23:32.533 20:18:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:32.533 20:18:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:32.533 20:18:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:32.533 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:32.533 20:18:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:32.533 20:18:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:32.533 20:18:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.533 20:18:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.533 20:18:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:32.533 20:18:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:32.533 20:18:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:32.533 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:32.533 20:18:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:32.533 20:18:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:32.533 20:18:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.533 20:18:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.533 20:18:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:32.533 20:18:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:32.533 20:18:29 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:23:32.533 20:18:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:32.533 20:18:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.533 20:18:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:32.533 20:18:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.533 20:18:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:32.534 Found net devices under 0000:27:00.0: cvl_0_0 00:23:32.534 
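For reference, the NIC discovery being echoed here reduces to a sysfs walk: nvmf/common.sh matches each PCI function's vendor/device ID against its known lists (e810, x722, mlx) and then lists the kernel net interfaces bound to it. A minimal by-hand sketch of the same lookup for the 0000:27:00.0 port reported in this run; the paths are plain sysfs and nothing beyond what the script itself reads is assumed:

  # vendor/device pair the script matches (0x8086 / 0x159b lands in its e810 list)
  cat /sys/bus/pci/devices/0000:27:00.0/vendor /sys/bus/pci/devices/0000:27:00.0/device
  # net interface(s) the kernel bound to that function; this run reports cvl_0_0
  ls /sys/bus/pci/devices/0000:27:00.0/net/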
20:18:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.534 20:18:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:32.534 20:18:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.534 20:18:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:32.534 20:18:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.534 20:18:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:32.534 Found net devices under 0000:27:00.1: cvl_0_1 00:23:32.534 20:18:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.534 20:18:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:32.534 20:18:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:32.534 20:18:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:32.534 20:18:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:32.534 20:18:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:32.534 20:18:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.534 20:18:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.534 20:18:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.534 20:18:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:32.534 20:18:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.534 20:18:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.534 20:18:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:32.534 20:18:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.534 20:18:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.534 20:18:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:32.534 20:18:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:32.534 20:18:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.534 20:18:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.534 20:18:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.534 20:18:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.534 20:18:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:32.534 20:18:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.534 20:18:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.534 20:18:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.534 20:18:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:32.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:23:32.534 00:23:32.534 --- 10.0.0.2 ping statistics --- 00:23:32.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.534 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:23:32.534 20:18:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:32.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:23:32.534 00:23:32.534 --- 10.0.0.1 ping statistics --- 00:23:32.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.534 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:23:32.534 20:18:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.534 20:18:30 -- nvmf/common.sh@410 -- # return 0 00:23:32.534 20:18:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:32.534 20:18:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.534 20:18:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:32.534 20:18:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:32.534 20:18:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.534 20:18:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:32.534 20:18:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:32.534 20:18:30 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:23:32.534 20:18:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:32.534 20:18:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:32.534 20:18:30 -- common/autotest_common.sh@10 -- # set +x 00:23:32.534 20:18:30 -- nvmf/common.sh@469 -- # nvmfpid=1596691 00:23:32.534 20:18:30 -- nvmf/common.sh@470 -- # waitforlisten 1596691 00:23:32.534 20:18:30 -- common/autotest_common.sh@819 -- # '[' -z 1596691 ']' 00:23:32.534 20:18:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.534 20:18:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:32.534 20:18:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.534 20:18:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:32.534 20:18:30 -- common/autotest_common.sh@10 -- # set +x 00:23:32.534 20:18:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:32.534 [2024-04-25 20:18:30.307659] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:32.534 [2024-04-25 20:18:30.307789] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.534 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.534 [2024-04-25 20:18:30.445061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:32.793 [2024-04-25 20:18:30.544672] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:32.793 [2024-04-25 20:18:30.544867] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.793 [2024-04-25 20:18:30.544882] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.793 [2024-04-25 20:18:30.544892] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
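For reference, the nvmf_tgt starting up here runs inside the cvl_0_0_ns_spdk namespace that was wired up just above. A minimal sketch of the equivalent manual launch, assuming this job's workspace layout; the trailing rpc.py call is only a readiness probe standing in for the harness's waitforlisten helper:

  # start the NVMe-oF target on cores 0-3 (-m 0xF) inside the test namespace
  sudo ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # confirm the RPC socket answers before issuing any configuration RPCs
  sudo /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py rpc_get_methods > /dev/null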
00:23:32.793 [2024-04-25 20:18:30.545054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.793 [2024-04-25 20:18:30.545149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.793 [2024-04-25 20:18:30.545249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.793 [2024-04-25 20:18:30.545260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:33.361 20:18:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:33.361 20:18:31 -- common/autotest_common.sh@852 -- # return 0 00:23:33.361 20:18:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:33.361 20:18:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:33.361 20:18:31 -- common/autotest_common.sh@10 -- # set +x 00:23:33.361 20:18:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.361 20:18:31 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:33.361 20:18:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.361 20:18:31 -- common/autotest_common.sh@10 -- # set +x 00:23:33.361 [2024-04-25 20:18:31.034024] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.361 20:18:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.361 20:18:31 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:33.361 20:18:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.361 20:18:31 -- common/autotest_common.sh@10 -- # set +x 00:23:33.361 Malloc0 00:23:33.361 20:18:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.361 20:18:31 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:33.361 20:18:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.361 20:18:31 -- common/autotest_common.sh@10 -- # set +x 00:23:33.361 20:18:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.361 20:18:31 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:33.361 20:18:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.361 20:18:31 -- common/autotest_common.sh@10 -- # set +x 00:23:33.361 20:18:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.361 20:18:31 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:33.361 20:18:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.361 20:18:31 -- common/autotest_common.sh@10 -- # set +x 00:23:33.361 [2024-04-25 20:18:31.103568] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.361 20:18:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.361 20:18:31 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:23:33.361 test case1: single bdev can't be used in multiple subsystems 00:23:33.361 20:18:31 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:33.362 20:18:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.362 20:18:31 -- common/autotest_common.sh@10 -- # set +x 00:23:33.362 20:18:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.362 20:18:31 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:33.362 20:18:31 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:23:33.362 20:18:31 -- common/autotest_common.sh@10 -- # set +x 00:23:33.362 20:18:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.362 20:18:31 -- target/nmic.sh@28 -- # nmic_status=0 00:23:33.362 20:18:31 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:23:33.362 20:18:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.362 20:18:31 -- common/autotest_common.sh@10 -- # set +x 00:23:33.362 [2024-04-25 20:18:31.127346] bdev.c:7935:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:23:33.362 [2024-04-25 20:18:31.127380] subsystem.c:1779:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:23:33.362 [2024-04-25 20:18:31.127393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:33.362 request: 00:23:33.362 { 00:23:33.362 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:23:33.362 "namespace": { 00:23:33.362 "bdev_name": "Malloc0" 00:23:33.362 }, 00:23:33.362 "method": "nvmf_subsystem_add_ns", 00:23:33.362 "req_id": 1 00:23:33.362 } 00:23:33.362 Got JSON-RPC error response 00:23:33.362 response: 00:23:33.362 { 00:23:33.362 "code": -32602, 00:23:33.362 "message": "Invalid parameters" 00:23:33.362 } 00:23:33.362 20:18:31 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:33.362 20:18:31 -- target/nmic.sh@29 -- # nmic_status=1 00:23:33.362 20:18:31 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:23:33.362 20:18:31 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:23:33.362 Adding namespace failed - expected result. 00:23:33.362 20:18:31 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:23:33.362 test case2: host connect to nvmf target in multiple paths 00:23:33.362 20:18:31 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:33.362 20:18:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.362 20:18:31 -- common/autotest_common.sh@10 -- # set +x 00:23:33.362 [2024-04-25 20:18:31.135467] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:33.362 20:18:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.362 20:18:31 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:34.744 20:18:32 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:23:36.122 20:18:33 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:23:36.122 20:18:33 -- common/autotest_common.sh@1177 -- # local i=0 00:23:36.122 20:18:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:36.122 20:18:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:36.122 20:18:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:38.076 20:18:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:38.076 20:18:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:38.076 20:18:35 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:23:38.076 20:18:35 -- common/autotest_common.sh@1186 -- # 
nvme_devices=1 00:23:38.076 20:18:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:38.076 20:18:35 -- common/autotest_common.sh@1187 -- # return 0 00:23:38.076 20:18:35 -- target/nmic.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:23:38.076 [global] 00:23:38.076 thread=1 00:23:38.076 invalidate=1 00:23:38.076 rw=write 00:23:38.076 time_based=1 00:23:38.076 runtime=1 00:23:38.076 ioengine=libaio 00:23:38.076 direct=1 00:23:38.076 bs=4096 00:23:38.076 iodepth=1 00:23:38.076 norandommap=0 00:23:38.076 numjobs=1 00:23:38.076 00:23:38.347 verify_dump=1 00:23:38.347 verify_backlog=512 00:23:38.347 verify_state_save=0 00:23:38.347 do_verify=1 00:23:38.347 verify=crc32c-intel 00:23:38.347 [job0] 00:23:38.347 filename=/dev/nvme0n1 00:23:38.347 Could not set queue depth (nvme0n1) 00:23:38.607 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:38.607 fio-3.35 00:23:38.607 Starting 1 thread 00:23:39.984 00:23:39.984 job0: (groupid=0, jobs=1): err= 0: pid=1598083: Thu Apr 25 20:18:37 2024 00:23:39.984 read: IOPS=21, BW=85.7KiB/s (87.7kB/s)(88.0KiB/1027msec) 00:23:39.984 slat (nsec): min=8487, max=38104, avg=30592.50, stdev=6526.70 00:23:39.984 clat (usec): min=41048, max=42134, avg=41902.53, stdev=209.20 00:23:39.984 lat (usec): min=41080, max=42167, avg=41933.12, stdev=208.79 00:23:39.984 clat percentiles (usec): 00:23:39.984 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:23:39.984 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:23:39.984 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:39.984 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:39.984 | 99.99th=[42206] 00:23:39.984 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:23:39.984 slat (nsec): min=5225, max=48050, avg=7417.87, stdev=2024.47 00:23:39.984 clat (usec): min=139, max=498, avg=195.03, stdev=16.27 00:23:39.984 lat (usec): min=146, max=546, avg=202.45, stdev=17.77 00:23:39.984 clat percentiles (usec): 00:23:39.984 | 1.00th=[ 165], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 190], 00:23:39.984 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 194], 60.00th=[ 196], 00:23:39.984 | 70.00th=[ 198], 80.00th=[ 198], 90.00th=[ 202], 95.00th=[ 206], 00:23:39.984 | 99.00th=[ 215], 99.50th=[ 277], 99.90th=[ 498], 99.95th=[ 498], 00:23:39.984 | 99.99th=[ 498] 00:23:39.984 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:23:39.984 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:39.984 lat (usec) : 250=95.32%, 500=0.56% 00:23:39.984 lat (msec) : 50=4.12% 00:23:39.984 cpu : usr=0.49%, sys=0.29%, ctx=534, majf=0, minf=1 00:23:39.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:39.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.984 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:39.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:39.984 00:23:39.984 Run status group 0 (all jobs): 00:23:39.984 READ: bw=85.7KiB/s (87.7kB/s), 85.7KiB/s-85.7KiB/s (87.7kB/s-87.7kB/s), io=88.0KiB (90.1kB), run=1027-1027msec 00:23:39.984 WRITE: bw=1994KiB/s (2042kB/s), 1994KiB/s-1994KiB/s (2042kB/s-2042kB/s), io=2048KiB (2097kB), 
run=1027-1027msec 00:23:39.984 00:23:39.984 Disk stats (read/write): 00:23:39.984 nvme0n1: ios=68/512, merge=0/0, ticks=784/96, in_queue=880, util=91.78% 00:23:39.984 20:18:37 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:39.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:23:39.984 20:18:37 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:39.984 20:18:37 -- common/autotest_common.sh@1198 -- # local i=0 00:23:39.984 20:18:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:39.984 20:18:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:39.984 20:18:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:39.984 20:18:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:39.984 20:18:37 -- common/autotest_common.sh@1210 -- # return 0 00:23:39.984 20:18:37 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:39.984 20:18:37 -- target/nmic.sh@53 -- # nvmftestfini 00:23:39.984 20:18:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:39.984 20:18:37 -- nvmf/common.sh@116 -- # sync 00:23:39.984 20:18:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:39.984 20:18:37 -- nvmf/common.sh@119 -- # set +e 00:23:39.984 20:18:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:39.984 20:18:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:40.243 rmmod nvme_tcp 00:23:40.243 rmmod nvme_fabrics 00:23:40.243 rmmod nvme_keyring 00:23:40.243 20:18:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:40.243 20:18:37 -- nvmf/common.sh@123 -- # set -e 00:23:40.243 20:18:37 -- nvmf/common.sh@124 -- # return 0 00:23:40.243 20:18:37 -- nvmf/common.sh@477 -- # '[' -n 1596691 ']' 00:23:40.243 20:18:37 -- nvmf/common.sh@478 -- # killprocess 1596691 00:23:40.243 20:18:37 -- common/autotest_common.sh@926 -- # '[' -z 1596691 ']' 00:23:40.243 20:18:37 -- common/autotest_common.sh@930 -- # kill -0 1596691 00:23:40.243 20:18:37 -- common/autotest_common.sh@931 -- # uname 00:23:40.243 20:18:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:40.243 20:18:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1596691 00:23:40.243 20:18:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:40.243 20:18:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:40.243 20:18:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1596691' 00:23:40.243 killing process with pid 1596691 00:23:40.243 20:18:38 -- common/autotest_common.sh@945 -- # kill 1596691 00:23:40.243 20:18:38 -- common/autotest_common.sh@950 -- # wait 1596691 00:23:40.818 20:18:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:40.818 20:18:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:40.818 20:18:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:40.818 20:18:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:40.818 20:18:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:40.818 20:18:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.818 20:18:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.818 20:18:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.722 20:18:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:42.722 00:23:42.722 real 0m15.923s 00:23:42.722 user 0m48.827s 00:23:42.722 sys 0m4.693s 00:23:42.722 20:18:40 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:23:42.722 20:18:40 -- common/autotest_common.sh@10 -- # set +x 00:23:42.722 ************************************ 00:23:42.722 END TEST nvmf_nmic 00:23:42.722 ************************************ 00:23:42.722 20:18:40 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:23:42.722 20:18:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:42.722 20:18:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:42.722 20:18:40 -- common/autotest_common.sh@10 -- # set +x 00:23:42.722 ************************************ 00:23:42.722 START TEST nvmf_fio_target 00:23:42.722 ************************************ 00:23:42.722 20:18:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:23:42.981 * Looking for test storage... 00:23:42.981 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:23:42.981 20:18:40 -- target/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.981 20:18:40 -- nvmf/common.sh@7 -- # uname -s 00:23:42.981 20:18:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.981 20:18:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.981 20:18:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.981 20:18:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.981 20:18:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.981 20:18:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.981 20:18:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.981 20:18:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.981 20:18:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.981 20:18:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.981 20:18:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:42.981 20:18:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:42.981 20:18:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.981 20:18:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.981 20:18:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:42.981 20:18:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:42.981 20:18:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.981 20:18:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.981 20:18:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.981 20:18:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.981 20:18:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.981 20:18:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.981 20:18:40 -- paths/export.sh@5 -- # export PATH 00:23:42.981 20:18:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.981 20:18:40 -- nvmf/common.sh@46 -- # : 0 00:23:42.981 20:18:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:42.981 20:18:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:42.981 20:18:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:42.981 20:18:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.981 20:18:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.981 20:18:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:42.981 20:18:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:42.981 20:18:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:42.981 20:18:40 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:42.981 20:18:40 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:42.981 20:18:40 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:23:42.981 20:18:40 -- target/fio.sh@16 -- # nvmftestinit 00:23:42.981 20:18:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:42.981 20:18:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.981 20:18:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:42.981 20:18:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:42.981 20:18:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:42.981 20:18:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.981 20:18:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:42.981 20:18:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.981 20:18:40 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:23:42.981 20:18:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:42.981 20:18:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:42.981 20:18:40 -- 
common/autotest_common.sh@10 -- # set +x 00:23:48.258 20:18:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:48.258 20:18:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:48.258 20:18:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:48.258 20:18:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:48.258 20:18:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:48.258 20:18:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:48.258 20:18:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:48.258 20:18:45 -- nvmf/common.sh@294 -- # net_devs=() 00:23:48.258 20:18:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:48.258 20:18:45 -- nvmf/common.sh@295 -- # e810=() 00:23:48.258 20:18:45 -- nvmf/common.sh@295 -- # local -ga e810 00:23:48.258 20:18:45 -- nvmf/common.sh@296 -- # x722=() 00:23:48.258 20:18:45 -- nvmf/common.sh@296 -- # local -ga x722 00:23:48.258 20:18:45 -- nvmf/common.sh@297 -- # mlx=() 00:23:48.258 20:18:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:48.258 20:18:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.258 20:18:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.258 20:18:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.258 20:18:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.258 20:18:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.258 20:18:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.258 20:18:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.258 20:18:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.258 20:18:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.258 20:18:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.258 20:18:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.258 20:18:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:48.258 20:18:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:48.258 20:18:45 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:23:48.258 20:18:45 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:23:48.258 20:18:45 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:23:48.258 20:18:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:48.258 20:18:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:48.258 20:18:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:48.258 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:48.258 20:18:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:48.258 20:18:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:48.258 20:18:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.258 20:18:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.258 20:18:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:48.258 20:18:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:48.258 20:18:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:48.258 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:48.258 20:18:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:48.258 20:18:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:48.258 20:18:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.258 20:18:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.258 
20:18:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:48.258 20:18:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:48.258 20:18:45 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:23:48.258 20:18:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:48.258 20:18:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.258 20:18:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:48.258 20:18:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.258 20:18:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:48.258 Found net devices under 0000:27:00.0: cvl_0_0 00:23:48.258 20:18:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.258 20:18:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:48.258 20:18:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.258 20:18:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:48.258 20:18:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.258 20:18:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:48.258 Found net devices under 0000:27:00.1: cvl_0_1 00:23:48.258 20:18:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.258 20:18:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:48.258 20:18:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:48.258 20:18:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:48.258 20:18:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:48.258 20:18:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:48.258 20:18:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:48.258 20:18:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:48.258 20:18:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:48.258 20:18:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:48.258 20:18:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:48.258 20:18:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:48.258 20:18:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:48.258 20:18:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:48.258 20:18:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:48.258 20:18:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:48.258 20:18:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:48.258 20:18:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:48.258 20:18:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:48.258 20:18:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:48.258 20:18:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:48.258 20:18:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:48.258 20:18:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:48.258 20:18:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:48.258 20:18:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:48.258 20:18:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:48.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:48.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:23:48.258 00:23:48.258 --- 10.0.0.2 ping statistics --- 00:23:48.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.258 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:23:48.258 20:18:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:48.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:48.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:23:48.258 00:23:48.258 --- 10.0.0.1 ping statistics --- 00:23:48.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.258 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:23:48.258 20:18:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:48.258 20:18:45 -- nvmf/common.sh@410 -- # return 0 00:23:48.258 20:18:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:48.258 20:18:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.258 20:18:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:48.258 20:18:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:48.258 20:18:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.258 20:18:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:48.258 20:18:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:48.258 20:18:45 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:23:48.258 20:18:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:48.258 20:18:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:48.258 20:18:45 -- common/autotest_common.sh@10 -- # set +x 00:23:48.258 20:18:45 -- nvmf/common.sh@469 -- # nvmfpid=1602276 00:23:48.258 20:18:45 -- nvmf/common.sh@470 -- # waitforlisten 1602276 00:23:48.258 20:18:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:48.259 20:18:45 -- common/autotest_common.sh@819 -- # '[' -z 1602276 ']' 00:23:48.259 20:18:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.259 20:18:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:48.259 20:18:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.259 20:18:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:48.259 20:18:45 -- common/autotest_common.sh@10 -- # set +x 00:23:48.259 [2024-04-25 20:18:46.030012] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:48.259 [2024-04-25 20:18:46.030080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.259 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.259 [2024-04-25 20:18:46.120223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:48.516 [2024-04-25 20:18:46.213489] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:48.516 [2024-04-25 20:18:46.213658] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.516 [2024-04-25 20:18:46.213671] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:48.516 [2024-04-25 20:18:46.213680] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:48.516 [2024-04-25 20:18:46.213755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.516 [2024-04-25 20:18:46.213857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.516 [2024-04-25 20:18:46.213958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.516 [2024-04-25 20:18:46.213968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:49.083 20:18:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:49.083 20:18:46 -- common/autotest_common.sh@852 -- # return 0 00:23:49.083 20:18:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:49.083 20:18:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:49.083 20:18:46 -- common/autotest_common.sh@10 -- # set +x 00:23:49.084 20:18:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.084 20:18:46 -- target/fio.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:49.084 [2024-04-25 20:18:46.892127] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.084 20:18:46 -- target/fio.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:49.341 20:18:47 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:23:49.341 20:18:47 -- target/fio.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:49.601 20:18:47 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:23:49.601 20:18:47 -- target/fio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:49.601 20:18:47 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:23:49.601 20:18:47 -- target/fio.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:49.860 20:18:47 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:23:49.860 20:18:47 -- target/fio.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:23:50.117 20:18:47 -- target/fio.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:50.117 20:18:47 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:23:50.117 20:18:47 -- target/fio.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:50.375 20:18:48 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:23:50.375 20:18:48 -- target/fio.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:50.375 20:18:48 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:23:50.375 20:18:48 -- target/fio.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:23:50.632 20:18:48 -- target/fio.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:50.892 20:18:48 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:23:50.892 20:18:48 -- target/fio.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:50.892 20:18:48 -- 
target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:23:50.892 20:18:48 -- target/fio.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:51.151 20:18:48 -- target/fio.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:51.151 [2024-04-25 20:18:48.991956] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.151 20:18:49 -- target/fio.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:23:51.411 20:18:49 -- target/fio.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:23:51.411 20:18:49 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:52.788 20:18:50 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:23:52.788 20:18:50 -- common/autotest_common.sh@1177 -- # local i=0 00:23:52.788 20:18:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:52.788 20:18:50 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:23:52.788 20:18:50 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:23:52.788 20:18:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:55.321 20:18:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:55.321 20:18:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:55.321 20:18:52 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:23:55.321 20:18:52 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:23:55.321 20:18:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:55.321 20:18:52 -- common/autotest_common.sh@1187 -- # return 0 00:23:55.321 20:18:52 -- target/fio.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:23:55.321 [global] 00:23:55.321 thread=1 00:23:55.321 invalidate=1 00:23:55.321 rw=write 00:23:55.321 time_based=1 00:23:55.321 runtime=1 00:23:55.321 ioengine=libaio 00:23:55.321 direct=1 00:23:55.321 bs=4096 00:23:55.321 iodepth=1 00:23:55.321 norandommap=0 00:23:55.321 numjobs=1 00:23:55.321 00:23:55.321 verify_dump=1 00:23:55.321 verify_backlog=512 00:23:55.321 verify_state_save=0 00:23:55.321 do_verify=1 00:23:55.321 verify=crc32c-intel 00:23:55.321 [job0] 00:23:55.321 filename=/dev/nvme0n1 00:23:55.321 [job1] 00:23:55.321 filename=/dev/nvme0n2 00:23:55.321 [job2] 00:23:55.321 filename=/dev/nvme0n3 00:23:55.321 [job3] 00:23:55.321 filename=/dev/nvme0n4 00:23:55.321 Could not set queue depth (nvme0n1) 00:23:55.321 Could not set queue depth (nvme0n2) 00:23:55.321 Could not set queue depth (nvme0n3) 00:23:55.321 Could not set queue depth (nvme0n4) 00:23:55.321 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:55.321 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:55.321 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:55.321 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:23:55.321 fio-3.35 00:23:55.321 Starting 4 threads 00:23:56.690 00:23:56.690 job0: (groupid=0, jobs=1): err= 0: pid=1603854: Thu Apr 25 20:18:54 2024 00:23:56.690 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:23:56.690 slat (nsec): min=3577, max=17369, avg=6311.44, stdev=1145.49 00:23:56.690 clat (usec): min=172, max=882, avg=276.50, stdev=56.74 00:23:56.690 lat (usec): min=178, max=889, avg=282.82, stdev=57.15 00:23:56.690 clat percentiles (usec): 00:23:56.690 | 1.00th=[ 190], 5.00th=[ 202], 10.00th=[ 212], 20.00th=[ 229], 00:23:56.690 | 30.00th=[ 243], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 285], 00:23:56.690 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 334], 95.00th=[ 355], 00:23:56.690 | 99.00th=[ 469], 99.50th=[ 490], 99.90th=[ 502], 99.95th=[ 570], 00:23:56.690 | 99.99th=[ 881] 00:23:56.690 write: IOPS=2500, BW=9.77MiB/s (10.2MB/s)(9.78MiB/1001msec); 0 zone resets 00:23:56.690 slat (nsec): min=4752, max=58820, avg=6714.29, stdev=1792.24 00:23:56.690 clat (usec): min=109, max=494, avg=157.47, stdev=27.12 00:23:56.690 lat (usec): min=114, max=553, avg=164.19, stdev=27.80 00:23:56.690 clat percentiles (usec): 00:23:56.690 | 1.00th=[ 116], 5.00th=[ 121], 10.00th=[ 125], 20.00th=[ 137], 00:23:56.690 | 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:23:56.690 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 184], 95.00th=[ 202], 00:23:56.690 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 420], 99.95th=[ 486], 00:23:56.690 | 99.99th=[ 494] 00:23:56.690 bw ( KiB/s): min=10520, max=10520, per=48.59%, avg=10520.00, stdev= 0.00, samples=1 00:23:56.690 iops : min= 2630, max= 2630, avg=2630.00, stdev= 0.00, samples=1 00:23:56.690 lat (usec) : 250=70.47%, 500=29.49%, 750=0.02%, 1000=0.02% 00:23:56.690 cpu : usr=1.70%, sys=3.10%, ctx=4552, majf=0, minf=1 00:23:56.690 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:56.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.690 issued rwts: total=2048,2503,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.690 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:56.690 job1: (groupid=0, jobs=1): err= 0: pid=1603866: Thu Apr 25 20:18:54 2024 00:23:56.690 read: IOPS=95, BW=381KiB/s (390kB/s)(392KiB/1030msec) 00:23:56.690 slat (nsec): min=3998, max=30511, avg=7150.81, stdev=4680.89 00:23:56.690 clat (usec): min=207, max=42029, avg=9219.73, stdev=17196.35 00:23:56.690 lat (usec): min=212, max=42039, avg=9226.88, stdev=17197.78 00:23:56.690 clat percentiles (usec): 00:23:56.690 | 1.00th=[ 208], 5.00th=[ 219], 10.00th=[ 233], 20.00th=[ 247], 00:23:56.690 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 289], 00:23:56.690 | 70.00th=[ 412], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:23:56.690 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:56.690 | 99.99th=[42206] 00:23:56.691 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:23:56.691 slat (nsec): min=4106, max=45920, avg=6727.23, stdev=2572.73 00:23:56.691 clat (usec): min=151, max=638, avg=236.09, stdev=34.86 00:23:56.691 lat (usec): min=158, max=684, avg=242.82, stdev=35.88 00:23:56.691 clat percentiles (usec): 00:23:56.691 | 1.00th=[ 159], 5.00th=[ 186], 10.00th=[ 198], 20.00th=[ 212], 00:23:56.691 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 243], 00:23:56.691 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 
269], 95.00th=[ 277], 00:23:56.691 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 635], 99.95th=[ 635], 00:23:56.691 | 99.99th=[ 635] 00:23:56.691 bw ( KiB/s): min= 4096, max= 4096, per=18.92%, avg=4096.00, stdev= 0.00, samples=1 00:23:56.691 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:56.691 lat (usec) : 250=62.13%, 500=34.26%, 750=0.16% 00:23:56.691 lat (msec) : 50=3.44% 00:23:56.691 cpu : usr=0.29%, sys=0.19%, ctx=612, majf=0, minf=1 00:23:56.691 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:56.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.691 issued rwts: total=98,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.691 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:56.691 job2: (groupid=0, jobs=1): err= 0: pid=1603887: Thu Apr 25 20:18:54 2024 00:23:56.691 read: IOPS=20, BW=83.2KiB/s (85.2kB/s)(84.0KiB/1010msec) 00:23:56.691 slat (nsec): min=9174, max=33795, avg=31924.24, stdev=5218.02 00:23:56.691 clat (usec): min=40965, max=42061, avg=41888.12, stdev=221.28 00:23:56.691 lat (usec): min=40998, max=42094, avg=41920.04, stdev=221.84 00:23:56.691 clat percentiles (usec): 00:23:56.691 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:23:56.691 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:23:56.691 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:56.691 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:56.691 | 99.99th=[42206] 00:23:56.691 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:23:56.691 slat (nsec): min=4897, max=40216, avg=8691.32, stdev=2451.77 00:23:56.691 clat (usec): min=164, max=756, avg=241.40, stdev=46.23 00:23:56.691 lat (usec): min=170, max=764, avg=250.10, stdev=46.84 00:23:56.691 clat percentiles (usec): 00:23:56.691 | 1.00th=[ 169], 5.00th=[ 186], 10.00th=[ 204], 20.00th=[ 215], 00:23:56.691 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 243], 00:23:56.691 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 306], 00:23:56.691 | 99.00th=[ 379], 99.50th=[ 416], 99.90th=[ 758], 99.95th=[ 758], 00:23:56.691 | 99.99th=[ 758] 00:23:56.691 bw ( KiB/s): min= 4096, max= 4096, per=18.92%, avg=4096.00, stdev= 0.00, samples=1 00:23:56.691 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:56.691 lat (usec) : 250=64.92%, 500=30.77%, 750=0.19%, 1000=0.19% 00:23:56.691 lat (msec) : 50=3.94% 00:23:56.691 cpu : usr=0.30%, sys=0.59%, ctx=536, majf=0, minf=1 00:23:56.691 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:56.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.691 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.691 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:56.691 job3: (groupid=0, jobs=1): err= 0: pid=1603894: Thu Apr 25 20:18:54 2024 00:23:56.691 read: IOPS=1933, BW=7732KiB/s (7918kB/s)(7740KiB/1001msec) 00:23:56.691 slat (nsec): min=3031, max=33170, avg=6017.70, stdev=1616.44 00:23:56.691 clat (usec): min=227, max=754, avg=316.12, stdev=60.20 00:23:56.691 lat (usec): min=233, max=759, avg=322.14, stdev=60.30 00:23:56.691 clat percentiles (usec): 00:23:56.691 | 1.00th=[ 251], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 
277], 00:23:56.691 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 306], 00:23:56.691 | 70.00th=[ 318], 80.00th=[ 338], 90.00th=[ 424], 95.00th=[ 465], 00:23:56.691 | 99.00th=[ 498], 99.50th=[ 510], 99.90th=[ 603], 99.95th=[ 758], 00:23:56.691 | 99.99th=[ 758] 00:23:56.691 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:23:56.691 slat (nsec): min=4667, max=51599, avg=6820.66, stdev=1952.55 00:23:56.691 clat (usec): min=123, max=546, avg=172.93, stdev=28.31 00:23:56.691 lat (usec): min=129, max=598, avg=179.75, stdev=29.15 00:23:56.691 clat percentiles (usec): 00:23:56.691 | 1.00th=[ 137], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:23:56.691 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:23:56.691 | 70.00th=[ 176], 80.00th=[ 186], 90.00th=[ 206], 95.00th=[ 227], 00:23:56.691 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 367], 99.95th=[ 367], 00:23:56.691 | 99.99th=[ 545] 00:23:56.691 bw ( KiB/s): min= 8192, max= 8192, per=37.84%, avg=8192.00, stdev= 0.00, samples=1 00:23:56.691 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:23:56.691 lat (usec) : 250=50.34%, 500=49.16%, 750=0.48%, 1000=0.03% 00:23:56.691 cpu : usr=2.00%, sys=3.60%, ctx=3984, majf=0, minf=1 00:23:56.691 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:56.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.691 issued rwts: total=1935,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.691 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:56.691 00:23:56.691 Run status group 0 (all jobs): 00:23:56.691 READ: bw=15.6MiB/s (16.3MB/s), 83.2KiB/s-8184KiB/s (85.2kB/s-8380kB/s), io=16.0MiB (16.8MB), run=1001-1030msec 00:23:56.691 WRITE: bw=21.1MiB/s (22.2MB/s), 1988KiB/s-9.77MiB/s (2036kB/s-10.2MB/s), io=21.8MiB (22.8MB), run=1001-1030msec 00:23:56.691 00:23:56.691 Disk stats (read/write): 00:23:56.691 nvme0n1: ios=1791/2048, merge=0/0, ticks=1112/322, in_queue=1434, util=96.89% 00:23:56.691 nvme0n2: ios=123/512, merge=0/0, ticks=1656/123, in_queue=1779, util=97.24% 00:23:56.691 nvme0n3: ios=39/512, merge=0/0, ticks=1592/115, in_queue=1707, util=97.04% 00:23:56.691 nvme0n4: ios=1561/1796, merge=0/0, ticks=1398/306, in_queue=1704, util=97.32% 00:23:56.691 20:18:54 -- target/fio.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:23:56.691 [global] 00:23:56.691 thread=1 00:23:56.691 invalidate=1 00:23:56.691 rw=randwrite 00:23:56.691 time_based=1 00:23:56.691 runtime=1 00:23:56.691 ioengine=libaio 00:23:56.691 direct=1 00:23:56.691 bs=4096 00:23:56.691 iodepth=1 00:23:56.691 norandommap=0 00:23:56.691 numjobs=1 00:23:56.691 00:23:56.691 verify_dump=1 00:23:56.691 verify_backlog=512 00:23:56.691 verify_state_save=0 00:23:56.691 do_verify=1 00:23:56.691 verify=crc32c-intel 00:23:56.691 [job0] 00:23:56.691 filename=/dev/nvme0n1 00:23:56.691 [job1] 00:23:56.691 filename=/dev/nvme0n2 00:23:56.691 [job2] 00:23:56.691 filename=/dev/nvme0n3 00:23:56.691 [job3] 00:23:56.691 filename=/dev/nvme0n4 00:23:56.691 Could not set queue depth (nvme0n1) 00:23:56.691 Could not set queue depth (nvme0n2) 00:23:56.691 Could not set queue depth (nvme0n3) 00:23:56.691 Could not set queue depth (nvme0n4) 00:23:56.950 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:56.950 job1: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:56.950 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:56.950 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:56.950 fio-3.35 00:23:56.950 Starting 4 threads 00:23:58.324 00:23:58.324 job0: (groupid=0, jobs=1): err= 0: pid=1604400: Thu Apr 25 20:18:56 2024 00:23:58.324 read: IOPS=19, BW=79.6KiB/s (81.5kB/s)(80.0KiB/1005msec) 00:23:58.324 slat (nsec): min=6881, max=43734, avg=34196.20, stdev=7703.38 00:23:58.324 clat (usec): min=41861, max=42500, avg=41986.29, stdev=132.47 00:23:58.324 lat (usec): min=41905, max=42507, avg=42020.49, stdev=126.47 00:23:58.324 clat percentiles (usec): 00:23:58.324 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:23:58.324 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:23:58.324 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:58.324 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:23:58.324 | 99.99th=[42730] 00:23:58.324 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:23:58.324 slat (nsec): min=4618, max=43946, avg=16873.44, stdev=9394.24 00:23:58.324 clat (usec): min=154, max=712, avg=299.13, stdev=87.49 00:23:58.324 lat (usec): min=163, max=756, avg=316.00, stdev=92.00 00:23:58.324 clat percentiles (usec): 00:23:58.324 | 1.00th=[ 159], 5.00th=[ 174], 10.00th=[ 188], 20.00th=[ 217], 00:23:58.324 | 30.00th=[ 245], 40.00th=[ 269], 50.00th=[ 285], 60.00th=[ 318], 00:23:58.324 | 70.00th=[ 347], 80.00th=[ 379], 90.00th=[ 424], 95.00th=[ 453], 00:23:58.324 | 99.00th=[ 494], 99.50th=[ 545], 99.90th=[ 717], 99.95th=[ 717], 00:23:58.324 | 99.99th=[ 717] 00:23:58.324 bw ( KiB/s): min= 4087, max= 4087, per=34.19%, avg=4087.00, stdev= 0.00, samples=1 00:23:58.324 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:23:58.324 lat (usec) : 250=31.77%, 500=63.72%, 750=0.75% 00:23:58.324 lat (msec) : 50=3.76% 00:23:58.324 cpu : usr=0.60%, sys=1.20%, ctx=534, majf=0, minf=1 00:23:58.324 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.324 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.324 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:58.324 job1: (groupid=0, jobs=1): err= 0: pid=1604413: Thu Apr 25 20:18:56 2024 00:23:58.324 read: IOPS=20, BW=83.6KiB/s (85.6kB/s)(84.0KiB/1005msec) 00:23:58.324 slat (nsec): min=7093, max=41760, avg=32105.62, stdev=7224.16 00:23:58.324 clat (usec): min=40899, max=42469, avg=41786.69, stdev=428.26 00:23:58.324 lat (usec): min=40929, max=42476, avg=41818.79, stdev=427.79 00:23:58.324 clat percentiles (usec): 00:23:58.324 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:23:58.324 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:23:58.324 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:58.324 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:23:58.324 | 99.99th=[42730] 00:23:58.324 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:23:58.324 slat (nsec): min=4771, max=46745, avg=13643.82, stdev=8203.09 
00:23:58.324 clat (usec): min=130, max=817, avg=229.14, stdev=62.94 00:23:58.324 lat (usec): min=135, max=864, avg=242.78, stdev=67.55 00:23:58.324 clat percentiles (usec): 00:23:58.324 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 155], 20.00th=[ 163], 00:23:58.324 | 30.00th=[ 176], 40.00th=[ 208], 50.00th=[ 245], 60.00th=[ 249], 00:23:58.324 | 70.00th=[ 262], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 322], 00:23:58.324 | 99.00th=[ 355], 99.50th=[ 367], 99.90th=[ 816], 99.95th=[ 816], 00:23:58.324 | 99.99th=[ 816] 00:23:58.324 bw ( KiB/s): min= 4096, max= 4096, per=34.27%, avg=4096.00, stdev= 0.00, samples=1 00:23:58.324 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:58.324 lat (usec) : 250=61.16%, 500=34.71%, 1000=0.19% 00:23:58.324 lat (msec) : 50=3.94% 00:23:58.324 cpu : usr=0.30%, sys=0.70%, ctx=533, majf=0, minf=1 00:23:58.324 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.324 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.324 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:58.324 job2: (groupid=0, jobs=1): err= 0: pid=1604437: Thu Apr 25 20:18:56 2024 00:23:58.324 read: IOPS=1005, BW=4023KiB/s (4120kB/s)(4136KiB/1028msec) 00:23:58.324 slat (nsec): min=3973, max=45788, avg=10361.67, stdev=7679.74 00:23:58.324 clat (usec): min=201, max=42081, avg=675.36, stdev=4086.42 00:23:58.324 lat (usec): min=207, max=42087, avg=685.72, stdev=4086.07 00:23:58.324 clat percentiles (usec): 00:23:58.324 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 231], 00:23:58.324 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 265], 00:23:58.324 | 70.00th=[ 277], 80.00th=[ 330], 90.00th=[ 359], 95.00th=[ 379], 00:23:58.324 | 99.00th=[ 498], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:58.324 | 99.99th=[42206] 00:23:58.324 write: IOPS=1494, BW=5977KiB/s (6120kB/s)(6144KiB/1028msec); 0 zone resets 00:23:58.324 slat (nsec): min=4339, max=79391, avg=11832.36, stdev=8609.31 00:23:58.324 clat (usec): min=114, max=615, avg=190.18, stdev=57.14 00:23:58.324 lat (usec): min=119, max=695, avg=202.01, stdev=63.84 00:23:58.324 clat percentiles (usec): 00:23:58.324 | 1.00th=[ 127], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:23:58.324 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 172], 00:23:58.324 | 70.00th=[ 186], 80.00th=[ 258], 90.00th=[ 285], 95.00th=[ 302], 00:23:58.324 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 578], 99.95th=[ 619], 00:23:58.324 | 99.99th=[ 619] 00:23:58.324 bw ( KiB/s): min= 4096, max= 8175, per=51.32%, avg=6135.50, stdev=2884.29, samples=2 00:23:58.324 iops : min= 1024, max= 2043, avg=1533.50, stdev=720.54, samples=2 00:23:58.324 lat (usec) : 250=65.33%, 500=34.20%, 750=0.08% 00:23:58.324 lat (msec) : 50=0.39% 00:23:58.324 cpu : usr=1.07%, sys=3.12%, ctx=2571, majf=0, minf=1 00:23:58.324 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.324 issued rwts: total=1034,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.324 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:58.324 job3: (groupid=0, jobs=1): err= 0: pid=1604446: Thu Apr 25 20:18:56 2024 00:23:58.324 read: IOPS=21, 
BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:23:58.324 slat (nsec): min=21111, max=45240, avg=35288.27, stdev=6621.24 00:23:58.325 clat (usec): min=565, max=42075, avg=38152.97, stdev=12115.44 00:23:58.325 lat (usec): min=588, max=42119, avg=38188.26, stdev=12115.96 00:23:58.325 clat percentiles (usec): 00:23:58.325 | 1.00th=[ 570], 5.00th=[ 889], 10.00th=[41157], 20.00th=[41681], 00:23:58.325 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:23:58.325 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:58.325 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:58.325 | 99.99th=[42206] 00:23:58.325 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:23:58.325 slat (nsec): min=4545, max=48345, avg=18061.14, stdev=9229.33 00:23:58.325 clat (usec): min=143, max=626, avg=298.08, stdev=90.23 00:23:58.325 lat (usec): min=148, max=650, avg=316.15, stdev=94.17 00:23:58.325 clat percentiles (usec): 00:23:58.325 | 1.00th=[ 155], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 208], 00:23:58.325 | 30.00th=[ 245], 40.00th=[ 269], 50.00th=[ 289], 60.00th=[ 310], 00:23:58.325 | 70.00th=[ 343], 80.00th=[ 379], 90.00th=[ 433], 95.00th=[ 461], 00:23:58.325 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[ 627], 99.95th=[ 627], 00:23:58.325 | 99.99th=[ 627] 00:23:58.325 bw ( KiB/s): min= 4087, max= 4087, per=34.19%, avg=4087.00, stdev= 0.00, samples=1 00:23:58.325 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:23:58.325 lat (usec) : 250=30.15%, 500=64.04%, 750=1.87%, 1000=0.19% 00:23:58.325 lat (msec) : 50=3.75% 00:23:58.325 cpu : usr=0.90%, sys=1.00%, ctx=534, majf=0, minf=1 00:23:58.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.325 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.325 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:58.325 00:23:58.325 Run status group 0 (all jobs): 00:23:58.325 READ: bw=4268KiB/s (4371kB/s), 79.6KiB/s-4023KiB/s (81.5kB/s-4120kB/s), io=4388KiB (4493kB), run=1005-1028msec 00:23:58.325 WRITE: bw=11.7MiB/s (12.2MB/s), 2038KiB/s-5977KiB/s (2087kB/s-6120kB/s), io=12.0MiB (12.6MB), run=1005-1028msec 00:23:58.325 00:23:58.325 Disk stats (read/write): 00:23:58.325 nvme0n1: ios=67/512, merge=0/0, ticks=1173/128, in_queue=1301, util=96.99% 00:23:58.325 nvme0n2: ios=37/512, merge=0/0, ticks=730/116, in_queue=846, util=85.93% 00:23:58.325 nvme0n3: ios=1050/1536, merge=0/0, ticks=670/286, in_queue=956, util=89.67% 00:23:58.325 nvme0n4: ios=18/512, merge=0/0, ticks=672/121, in_queue=793, util=89.46% 00:23:58.325 20:18:56 -- target/fio.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:23:58.325 [global] 00:23:58.325 thread=1 00:23:58.325 invalidate=1 00:23:58.325 rw=write 00:23:58.325 time_based=1 00:23:58.325 runtime=1 00:23:58.325 ioengine=libaio 00:23:58.325 direct=1 00:23:58.325 bs=4096 00:23:58.325 iodepth=128 00:23:58.325 norandommap=0 00:23:58.325 numjobs=1 00:23:58.325 00:23:58.325 verify_dump=1 00:23:58.325 verify_backlog=512 00:23:58.325 verify_state_save=0 00:23:58.325 do_verify=1 00:23:58.325 verify=crc32c-intel 00:23:58.325 [job0] 00:23:58.325 filename=/dev/nvme0n1 00:23:58.325 [job1] 00:23:58.325 filename=/dev/nvme0n2 00:23:58.325 [job2] 00:23:58.325 
filename=/dev/nvme0n3 00:23:58.325 [job3] 00:23:58.325 filename=/dev/nvme0n4 00:23:58.325 Could not set queue depth (nvme0n1) 00:23:58.325 Could not set queue depth (nvme0n2) 00:23:58.325 Could not set queue depth (nvme0n3) 00:23:58.325 Could not set queue depth (nvme0n4) 00:23:58.585 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:58.585 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:58.585 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:58.585 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:58.585 fio-3.35 00:23:58.585 Starting 4 threads 00:24:00.011 00:24:00.011 job0: (groupid=0, jobs=1): err= 0: pid=1604950: Thu Apr 25 20:18:57 2024 00:24:00.011 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec) 00:24:00.011 slat (nsec): min=1059, max=12518k, avg=134687.88, stdev=817748.78 00:24:00.011 clat (usec): min=4404, max=60235, avg=14569.24, stdev=7991.51 00:24:00.011 lat (usec): min=4409, max=60243, avg=14703.93, stdev=8063.81 00:24:00.011 clat percentiles (usec): 00:24:00.011 | 1.00th=[ 4490], 5.00th=[ 8356], 10.00th=[ 9372], 20.00th=[10028], 00:24:00.011 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11731], 60.00th=[12649], 00:24:00.011 | 70.00th=[14222], 80.00th=[16319], 90.00th=[25560], 95.00th=[33162], 00:24:00.011 | 99.00th=[47449], 99.50th=[53740], 99.90th=[60031], 99.95th=[60031], 00:24:00.011 | 99.99th=[60031] 00:24:00.011 write: IOPS=3528, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec); 0 zone resets 00:24:00.011 slat (nsec): min=1812, max=6186.6k, avg=159798.47, stdev=659430.19 00:24:00.011 clat (usec): min=3423, max=60208, avg=23317.81, stdev=11830.38 00:24:00.011 lat (usec): min=3428, max=60211, avg=23477.61, stdev=11904.14 00:24:00.011 clat percentiles (usec): 00:24:00.011 | 1.00th=[ 4817], 5.00th=[ 7439], 10.00th=[ 9634], 20.00th=[12649], 00:24:00.011 | 30.00th=[14615], 40.00th=[20317], 50.00th=[21365], 60.00th=[22414], 00:24:00.011 | 70.00th=[28181], 80.00th=[34866], 90.00th=[41681], 95.00th=[46924], 00:24:00.011 | 99.00th=[51119], 99.50th=[52167], 99.90th=[53740], 99.95th=[60031], 00:24:00.011 | 99.99th=[60031] 00:24:00.011 bw ( KiB/s): min=12552, max=15024, per=18.27%, avg=13788.00, stdev=1747.97, samples=2 00:24:00.011 iops : min= 3138, max= 3756, avg=3447.00, stdev=436.99, samples=2 00:24:00.011 lat (msec) : 4=0.27%, 10=15.54%, 20=44.52%, 50=38.40%, 100=1.26% 00:24:00.011 cpu : usr=2.27%, sys=2.67%, ctx=480, majf=0, minf=1 00:24:00.011 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:00.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:00.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:00.011 issued rwts: total=3072,3574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:00.011 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:00.011 job1: (groupid=0, jobs=1): err= 0: pid=1604951: Thu Apr 25 20:18:57 2024 00:24:00.011 read: IOPS=5638, BW=22.0MiB/s (23.1MB/s)(22.1MiB/1003msec) 00:24:00.011 slat (nsec): min=807, max=3709.2k, avg=80149.64, stdev=431102.79 00:24:00.011 clat (usec): min=695, max=14201, avg=9995.71, stdev=1284.33 00:24:00.011 lat (usec): min=3639, max=14209, avg=10075.86, stdev=1309.25 00:24:00.011 clat percentiles (usec): 00:24:00.011 | 1.00th=[ 7373], 5.00th=[ 7832], 10.00th=[ 8160], 20.00th=[ 9110], 
00:24:00.011 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:24:00.011 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11469], 95.00th=[12125], 00:24:00.011 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13960], 99.95th=[14091], 00:24:00.011 | 99.99th=[14222] 00:24:00.011 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:24:00.011 slat (nsec): min=1484, max=6745.7k, avg=86752.08, stdev=383737.29 00:24:00.011 clat (usec): min=3702, max=35526, avg=11423.54, stdev=3750.35 00:24:00.011 lat (usec): min=3704, max=35537, avg=11510.30, stdev=3770.66 00:24:00.011 clat percentiles (usec): 00:24:00.011 | 1.00th=[ 7111], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10028], 00:24:00.011 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:24:00.011 | 70.00th=[11076], 80.00th=[11469], 90.00th=[13304], 95.00th=[18482], 00:24:00.011 | 99.00th=[29754], 99.50th=[31851], 99.90th=[33817], 99.95th=[33817], 00:24:00.011 | 99.99th=[35390] 00:24:00.011 bw ( KiB/s): min=22808, max=25504, per=32.00%, avg=24156.00, stdev=1906.36, samples=2 00:24:00.011 iops : min= 5702, max= 6376, avg=6039.00, stdev=476.59, samples=2 00:24:00.011 lat (usec) : 750=0.01% 00:24:00.011 lat (msec) : 4=0.36%, 10=33.71%, 20=63.36%, 50=2.57% 00:24:00.011 cpu : usr=1.90%, sys=4.19%, ctx=787, majf=0, minf=1 00:24:00.011 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:00.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:00.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:00.011 issued rwts: total=5655,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:00.011 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:00.011 job2: (groupid=0, jobs=1): err= 0: pid=1604952: Thu Apr 25 20:18:57 2024 00:24:00.011 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1010msec) 00:24:00.011 slat (nsec): min=922, max=15885k, avg=111163.34, stdev=713970.59 00:24:00.011 clat (usec): min=4045, max=45219, avg=13887.51, stdev=5913.10 00:24:00.011 lat (usec): min=6409, max=45225, avg=13998.67, stdev=5969.34 00:24:00.011 clat percentiles (usec): 00:24:00.011 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[10159], 00:24:00.011 | 30.00th=[10683], 40.00th=[11338], 50.00th=[12911], 60.00th=[13566], 00:24:00.011 | 70.00th=[13829], 80.00th=[14484], 90.00th=[20055], 95.00th=[27657], 00:24:00.011 | 99.00th=[36963], 99.50th=[39060], 99.90th=[45351], 99.95th=[45351], 00:24:00.011 | 99.99th=[45351] 00:24:00.011 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:24:00.011 slat (nsec): min=1583, max=29735k, avg=143751.62, stdev=805340.30 00:24:00.011 clat (usec): min=3832, max=52964, avg=19007.80, stdev=10334.29 00:24:00.011 lat (usec): min=3840, max=52969, avg=19151.55, stdev=10392.97 00:24:00.011 clat percentiles (usec): 00:24:00.011 | 1.00th=[ 7963], 5.00th=[ 8848], 10.00th=[10683], 20.00th=[11338], 00:24:00.011 | 30.00th=[12649], 40.00th=[13173], 50.00th=[14222], 60.00th=[20055], 00:24:00.011 | 70.00th=[21365], 80.00th=[22414], 90.00th=[35390], 95.00th=[44303], 00:24:00.011 | 99.00th=[52691], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:24:00.011 | 99.99th=[53216] 00:24:00.011 bw ( KiB/s): min=12720, max=19088, per=21.07%, avg=15904.00, stdev=4502.86, samples=2 00:24:00.011 iops : min= 3180, max= 4772, avg=3976.00, stdev=1125.71, samples=2 00:24:00.011 lat (msec) : 4=0.20%, 10=12.75%, 20=60.74%, 50=25.12%, 100=1.20% 00:24:00.011 cpu : usr=1.78%, sys=3.17%, ctx=467, 
majf=0, minf=1 00:24:00.011 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:00.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:00.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:00.011 issued rwts: total=3591,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:00.012 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:00.012 job3: (groupid=0, jobs=1): err= 0: pid=1604953: Thu Apr 25 20:18:57 2024 00:24:00.012 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec) 00:24:00.012 slat (nsec): min=1012, max=11418k, avg=97958.35, stdev=679412.20 00:24:00.012 clat (usec): min=4271, max=40682, avg=12915.67, stdev=4822.96 00:24:00.012 lat (usec): min=4276, max=40728, avg=13013.63, stdev=4871.40 00:24:00.012 clat percentiles (usec): 00:24:00.012 | 1.00th=[ 5145], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9503], 00:24:00.012 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11600], 60.00th=[12780], 00:24:00.012 | 70.00th=[13960], 80.00th=[14746], 90.00th=[18744], 95.00th=[21627], 00:24:00.012 | 99.00th=[33162], 99.50th=[33162], 99.90th=[33162], 99.95th=[34866], 00:24:00.012 | 99.99th=[40633] 00:24:00.012 write: IOPS=5249, BW=20.5MiB/s (21.5MB/s)(20.7MiB/1010msec); 0 zone resets 00:24:00.012 slat (nsec): min=1695, max=16623k, avg=85297.89, stdev=583688.54 00:24:00.012 clat (usec): min=868, max=40119, avg=11626.98, stdev=4612.33 00:24:00.012 lat (usec): min=916, max=40124, avg=11712.28, stdev=4658.72 00:24:00.012 clat percentiles (usec): 00:24:00.012 | 1.00th=[ 3884], 5.00th=[ 5604], 10.00th=[ 6980], 20.00th=[ 8291], 00:24:00.012 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11207], 60.00th=[11600], 00:24:00.012 | 70.00th=[12518], 80.00th=[13173], 90.00th=[14877], 95.00th=[22938], 00:24:00.012 | 99.00th=[30802], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:24:00.012 | 99.99th=[40109] 00:24:00.012 bw ( KiB/s): min=18032, max=23368, per=27.42%, avg=20700.00, stdev=3773.12, samples=2 00:24:00.012 iops : min= 4508, max= 5842, avg=5175.00, stdev=943.28, samples=2 00:24:00.012 lat (usec) : 1000=0.02% 00:24:00.012 lat (msec) : 4=0.51%, 10=26.66%, 20=66.24%, 50=6.56% 00:24:00.012 cpu : usr=3.57%, sys=4.36%, ctx=485, majf=0, minf=1 00:24:00.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:00.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:00.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:00.012 issued rwts: total=5120,5302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:00.012 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:00.012 00:24:00.012 Run status group 0 (all jobs): 00:24:00.012 READ: bw=67.2MiB/s (70.5MB/s), 11.8MiB/s-22.0MiB/s (12.4MB/s-23.1MB/s), io=68.1MiB (71.4MB), run=1003-1013msec 00:24:00.012 WRITE: bw=73.7MiB/s (77.3MB/s), 13.8MiB/s-23.9MiB/s (14.5MB/s-25.1MB/s), io=74.7MiB (78.3MB), run=1003-1013msec 00:24:00.012 00:24:00.012 Disk stats (read/write): 00:24:00.012 nvme0n1: ios=2583/2871, merge=0/0, ticks=35244/63736, in_queue=98980, util=97.19% 00:24:00.012 nvme0n2: ios=4632/5095, merge=0/0, ticks=14859/16970, in_queue=31829, util=99.59% 00:24:00.012 nvme0n3: ios=3109/3583, merge=0/0, ticks=20822/29898, in_queue=50720, util=95.89% 00:24:00.012 nvme0n4: ios=4153/4255, merge=0/0, ticks=41104/36543, in_queue=77647, util=97.34% 00:24:00.012 20:18:57 -- target/fio.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t 
randwrite -r 1 -v 00:24:00.012 [global] 00:24:00.012 thread=1 00:24:00.012 invalidate=1 00:24:00.012 rw=randwrite 00:24:00.012 time_based=1 00:24:00.012 runtime=1 00:24:00.012 ioengine=libaio 00:24:00.012 direct=1 00:24:00.012 bs=4096 00:24:00.012 iodepth=128 00:24:00.012 norandommap=0 00:24:00.012 numjobs=1 00:24:00.012 00:24:00.012 verify_dump=1 00:24:00.012 verify_backlog=512 00:24:00.012 verify_state_save=0 00:24:00.012 do_verify=1 00:24:00.012 verify=crc32c-intel 00:24:00.012 [job0] 00:24:00.012 filename=/dev/nvme0n1 00:24:00.012 [job1] 00:24:00.012 filename=/dev/nvme0n2 00:24:00.012 [job2] 00:24:00.012 filename=/dev/nvme0n3 00:24:00.012 [job3] 00:24:00.012 filename=/dev/nvme0n4 00:24:00.012 Could not set queue depth (nvme0n1) 00:24:00.012 Could not set queue depth (nvme0n2) 00:24:00.012 Could not set queue depth (nvme0n3) 00:24:00.012 Could not set queue depth (nvme0n4) 00:24:00.270 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:00.270 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:00.270 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:00.270 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:00.270 fio-3.35 00:24:00.270 Starting 4 threads 00:24:01.645 00:24:01.645 job0: (groupid=0, jobs=1): err= 0: pid=1605419: Thu Apr 25 20:18:59 2024 00:24:01.645 read: IOPS=4976, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1004msec) 00:24:01.645 slat (nsec): min=928, max=10681k, avg=104022.64, stdev=666066.29 00:24:01.645 clat (usec): min=856, max=66121, avg=12776.80, stdev=5779.93 00:24:01.645 lat (usec): min=4427, max=66126, avg=12880.82, stdev=5828.15 00:24:01.645 clat percentiles (usec): 00:24:01.645 | 1.00th=[ 5080], 5.00th=[ 7111], 10.00th=[ 7570], 20.00th=[ 8455], 00:24:01.645 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[11076], 60.00th=[12518], 00:24:01.645 | 70.00th=[14091], 80.00th=[15401], 90.00th=[20317], 95.00th=[26084], 00:24:01.645 | 99.00th=[32113], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:24:01.645 | 99.99th=[66323] 00:24:01.645 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:24:01.645 slat (nsec): min=1459, max=8756.0k, avg=82088.61, stdev=502865.02 00:24:01.645 clat (usec): min=1039, max=37953, avg=12319.93, stdev=6924.67 00:24:01.645 lat (usec): min=1455, max=37957, avg=12402.02, stdev=6965.19 00:24:01.645 clat percentiles (usec): 00:24:01.646 | 1.00th=[ 1827], 5.00th=[ 5342], 10.00th=[ 6259], 20.00th=[ 8717], 00:24:01.646 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:24:01.646 | 70.00th=[11338], 80.00th=[14615], 90.00th=[22676], 95.00th=[30016], 00:24:01.646 | 99.00th=[34866], 99.50th=[35914], 99.90th=[36963], 99.95th=[37487], 00:24:01.646 | 99.99th=[38011] 00:24:01.646 bw ( KiB/s): min=20480, max=20480, per=31.07%, avg=20480.00, stdev= 0.00, samples=2 00:24:01.646 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:24:01.646 lat (usec) : 1000=0.01% 00:24:01.646 lat (msec) : 2=0.67%, 4=1.09%, 10=36.85%, 20=49.66%, 50=11.69% 00:24:01.646 lat (msec) : 100=0.02% 00:24:01.646 cpu : usr=2.59%, sys=4.39%, ctx=449, majf=0, minf=1 00:24:01.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:01.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.646 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:01.646 issued rwts: total=4996,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:01.646 job1: (groupid=0, jobs=1): err= 0: pid=1605420: Thu Apr 25 20:18:59 2024 00:24:01.646 read: IOPS=2773, BW=10.8MiB/s (11.4MB/s)(11.3MiB/1044msec) 00:24:01.646 slat (nsec): min=828, max=22774k, avg=152327.05, stdev=1008517.38 00:24:01.646 clat (usec): min=7794, max=84483, avg=19918.38, stdev=13746.68 00:24:01.646 lat (usec): min=7798, max=86067, avg=20070.70, stdev=13811.23 00:24:01.646 clat percentiles (usec): 00:24:01.646 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[10945], 20.00th=[12387], 00:24:01.646 | 30.00th=[12911], 40.00th=[13304], 50.00th=[15008], 60.00th=[16712], 00:24:01.646 | 70.00th=[18744], 80.00th=[22414], 90.00th=[37487], 95.00th=[48497], 00:24:01.646 | 99.00th=[84411], 99.50th=[84411], 99.90th=[84411], 99.95th=[84411], 00:24:01.646 | 99.99th=[84411] 00:24:01.646 write: IOPS=2942, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1044msec); 0 zone resets 00:24:01.646 slat (nsec): min=1429, max=15549k, avg=179231.95, stdev=830709.25 00:24:01.646 clat (usec): min=3198, max=86731, avg=24280.15, stdev=14631.70 00:24:01.646 lat (usec): min=3205, max=86735, avg=24459.38, stdev=14704.00 00:24:01.646 clat percentiles (usec): 00:24:01.646 | 1.00th=[ 8291], 5.00th=[11207], 10.00th=[12256], 20.00th=[13698], 00:24:01.646 | 30.00th=[16909], 40.00th=[18744], 50.00th=[20055], 60.00th=[21365], 00:24:01.646 | 70.00th=[23987], 80.00th=[31589], 90.00th=[44303], 95.00th=[53740], 00:24:01.646 | 99.00th=[82314], 99.50th=[85459], 99.90th=[86508], 99.95th=[86508], 00:24:01.646 | 99.99th=[86508] 00:24:01.646 bw ( KiB/s): min=12288, max=12288, per=18.64%, avg=12288.00, stdev= 0.00, samples=2 00:24:01.646 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:24:01.646 lat (msec) : 4=0.05%, 10=1.89%, 20=57.57%, 50=34.53%, 100=5.95% 00:24:01.646 cpu : usr=1.44%, sys=1.92%, ctx=419, majf=0, minf=1 00:24:01.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:01.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:01.646 issued rwts: total=2896,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:01.646 job2: (groupid=0, jobs=1): err= 0: pid=1605421: Thu Apr 25 20:18:59 2024 00:24:01.646 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:24:01.646 slat (nsec): min=901, max=8642.7k, avg=97895.92, stdev=596510.80 00:24:01.646 clat (usec): min=5825, max=26640, avg=12595.03, stdev=4258.45 00:24:01.646 lat (usec): min=5828, max=26648, avg=12692.93, stdev=4294.46 00:24:01.646 clat percentiles (usec): 00:24:01.646 | 1.00th=[ 6521], 5.00th=[ 8029], 10.00th=[ 8356], 20.00th=[ 9110], 00:24:01.646 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11338], 60.00th=[11994], 00:24:01.646 | 70.00th=[12911], 80.00th=[16188], 90.00th=[20055], 95.00th=[21627], 00:24:01.646 | 99.00th=[25035], 99.50th=[25297], 99.90th=[25822], 99.95th=[25822], 00:24:01.646 | 99.99th=[26608] 00:24:01.646 write: IOPS=5306, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1004msec); 0 zone resets 00:24:01.646 slat (nsec): min=1506, max=7472.7k, avg=89145.87, stdev=497672.00 00:24:01.646 clat (usec): min=1174, max=24591, avg=11643.42, stdev=3096.87 00:24:01.646 lat (usec): min=1210, max=24624, avg=11732.56, stdev=3122.52 00:24:01.646 clat percentiles 
(usec): 00:24:01.646 | 1.00th=[ 4490], 5.00th=[ 7635], 10.00th=[ 8455], 20.00th=[ 8979], 00:24:01.646 | 30.00th=[10028], 40.00th=[11076], 50.00th=[11338], 60.00th=[11863], 00:24:01.646 | 70.00th=[12518], 80.00th=[13698], 90.00th=[15795], 95.00th=[17695], 00:24:01.646 | 99.00th=[20579], 99.50th=[20841], 99.90th=[21890], 99.95th=[23200], 00:24:01.646 | 99.99th=[24511] 00:24:01.646 bw ( KiB/s): min=17032, max=24576, per=31.56%, avg=20804.00, stdev=5334.41, samples=2 00:24:01.646 iops : min= 4258, max= 6144, avg=5201.00, stdev=1333.60, samples=2 00:24:01.646 lat (msec) : 2=0.10%, 4=0.31%, 10=28.61%, 20=64.86%, 50=6.13% 00:24:01.646 cpu : usr=2.89%, sys=4.49%, ctx=542, majf=0, minf=1 00:24:01.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:01.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:01.646 issued rwts: total=5120,5328,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:01.646 job3: (groupid=0, jobs=1): err= 0: pid=1605422: Thu Apr 25 20:18:59 2024 00:24:01.646 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:24:01.646 slat (nsec): min=981, max=11989k, avg=124562.54, stdev=784950.21 00:24:01.646 clat (usec): min=4532, max=42457, avg=13971.50, stdev=6695.99 00:24:01.646 lat (usec): min=4537, max=42466, avg=14096.06, stdev=6748.50 00:24:01.646 clat percentiles (usec): 00:24:01.646 | 1.00th=[ 5735], 5.00th=[ 7767], 10.00th=[ 8160], 20.00th=[ 9241], 00:24:01.646 | 30.00th=[ 9765], 40.00th=[10683], 50.00th=[11600], 60.00th=[12911], 00:24:01.646 | 70.00th=[14746], 80.00th=[18220], 90.00th=[23725], 95.00th=[28443], 00:24:01.646 | 99.00th=[39584], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:24:01.646 | 99.99th=[42206] 00:24:01.646 write: IOPS=3659, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1007msec); 0 zone resets 00:24:01.646 slat (nsec): min=1551, max=6970.2k, avg=146361.59, stdev=558126.22 00:24:01.646 clat (usec): min=1035, max=51951, avg=21090.01, stdev=8707.99 00:24:01.646 lat (usec): min=1088, max=51958, avg=21236.37, stdev=8763.58 00:24:01.646 clat percentiles (usec): 00:24:01.646 | 1.00th=[ 3458], 5.00th=[ 6783], 10.00th=[ 8717], 20.00th=[15926], 00:24:01.646 | 30.00th=[17171], 40.00th=[18744], 50.00th=[20317], 60.00th=[22152], 00:24:01.646 | 70.00th=[23987], 80.00th=[27657], 90.00th=[32113], 95.00th=[36439], 00:24:01.646 | 99.00th=[47973], 99.50th=[49546], 99.90th=[52167], 99.95th=[52167], 00:24:01.646 | 99.99th=[52167] 00:24:01.646 bw ( KiB/s): min=12288, max=16384, per=21.75%, avg=14336.00, stdev=2896.31, samples=2 00:24:01.646 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:24:01.646 lat (msec) : 2=0.01%, 4=0.96%, 10=21.47%, 20=42.88%, 50=34.50% 00:24:01.646 lat (msec) : 100=0.17% 00:24:01.646 cpu : usr=2.09%, sys=3.58%, ctx=526, majf=0, minf=1 00:24:01.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:01.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:01.646 issued rwts: total=3584,3685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:01.646 00:24:01.646 Run status group 0 (all jobs): 00:24:01.646 READ: bw=62.1MiB/s (65.1MB/s), 10.8MiB/s-19.9MiB/s (11.4MB/s-20.9MB/s), io=64.8MiB (68.0MB), run=1004-1044msec 00:24:01.646 
WRITE: bw=64.4MiB/s (67.5MB/s), 11.5MiB/s-20.7MiB/s (12.1MB/s-21.7MB/s), io=67.2MiB (70.5MB), run=1004-1044msec 00:24:01.646 00:24:01.646 Disk stats (read/write): 00:24:01.646 nvme0n1: ios=4126/4352, merge=0/0, ticks=32875/37531, in_queue=70406, util=96.89% 00:24:01.646 nvme0n2: ios=2565/2655, merge=0/0, ticks=18360/27542, in_queue=45902, util=85.63% 00:24:01.646 nvme0n3: ios=4368/4608, merge=0/0, ticks=19833/19885, in_queue=39718, util=96.11% 00:24:01.646 nvme0n4: ios=2566/3071, merge=0/0, ticks=36754/66770, in_queue=103524, util=89.56% 00:24:01.646 20:18:59 -- target/fio.sh@55 -- # sync 00:24:01.646 20:18:59 -- target/fio.sh@59 -- # fio_pid=1605676 00:24:01.646 20:18:59 -- target/fio.sh@61 -- # sleep 3 00:24:01.646 20:18:59 -- target/fio.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:24:01.646 [global] 00:24:01.646 thread=1 00:24:01.646 invalidate=1 00:24:01.646 rw=read 00:24:01.646 time_based=1 00:24:01.646 runtime=10 00:24:01.646 ioengine=libaio 00:24:01.646 direct=1 00:24:01.646 bs=4096 00:24:01.646 iodepth=1 00:24:01.646 norandommap=1 00:24:01.646 numjobs=1 00:24:01.646 00:24:01.646 [job0] 00:24:01.646 filename=/dev/nvme0n1 00:24:01.646 [job1] 00:24:01.646 filename=/dev/nvme0n2 00:24:01.646 [job2] 00:24:01.646 filename=/dev/nvme0n3 00:24:01.646 [job3] 00:24:01.646 filename=/dev/nvme0n4 00:24:01.646 Could not set queue depth (nvme0n1) 00:24:01.646 Could not set queue depth (nvme0n2) 00:24:01.646 Could not set queue depth (nvme0n3) 00:24:01.646 Could not set queue depth (nvme0n4) 00:24:01.904 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:01.904 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:01.904 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:01.904 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:01.904 fio-3.35 00:24:01.904 Starting 4 threads 00:24:05.192 20:19:02 -- target/fio.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:24:05.192 20:19:02 -- target/fio.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:24:05.192 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=364544, buflen=4096 00:24:05.192 fio: pid=1605903, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:24:05.192 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=36618240, buflen=4096 00:24:05.192 fio: pid=1605902, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:24:05.192 20:19:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:05.192 20:19:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:24:05.192 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=1474560, buflen=4096 00:24:05.192 fio: pid=1605900, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:05.192 20:19:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:05.192 20:19:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:24:05.192 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=303104, 
buflen=4096 00:24:05.192 fio: pid=1605901, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:24:05.192 20:19:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:05.192 20:19:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:24:05.192 00:24:05.192 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1605900: Thu Apr 25 20:19:03 2024 00:24:05.192 read: IOPS=124, BW=497KiB/s (509kB/s)(1440KiB/2898msec) 00:24:05.192 slat (usec): min=5, max=10878, avg=42.91, stdev=572.00 00:24:05.192 clat (usec): min=181, max=45235, avg=8002.10, stdev=16249.44 00:24:05.192 lat (usec): min=189, max=52948, avg=8045.00, stdev=16332.93 00:24:05.192 clat percentiles (usec): 00:24:05.192 | 1.00th=[ 186], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 212], 00:24:05.192 | 30.00th=[ 217], 40.00th=[ 227], 50.00th=[ 241], 60.00th=[ 251], 00:24:05.192 | 70.00th=[ 269], 80.00th=[ 445], 90.00th=[42206], 95.00th=[42206], 00:24:05.192 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:24:05.192 | 99.99th=[45351] 00:24:05.192 bw ( KiB/s): min= 96, max= 2416, per=4.53%, avg=560.00, stdev=1037.54, samples=5 00:24:05.192 iops : min= 24, max= 604, avg=140.00, stdev=259.38, samples=5 00:24:05.192 lat (usec) : 250=59.28%, 500=21.33%, 750=0.28%, 1000=0.28% 00:24:05.192 lat (msec) : 50=18.56% 00:24:05.192 cpu : usr=0.00%, sys=0.28%, ctx=362, majf=0, minf=1 00:24:05.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:05.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.192 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.192 issued rwts: total=361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:05.192 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1605901: Thu Apr 25 20:19:03 2024 00:24:05.192 read: IOPS=24, BW=96.6KiB/s (98.9kB/s)(296KiB/3064msec) 00:24:05.192 slat (usec): min=7, max=12517, avg=449.90, stdev=2105.47 00:24:05.192 clat (usec): min=661, max=50312, avg=40935.07, stdev=6811.24 00:24:05.192 lat (usec): min=698, max=54974, avg=41293.71, stdev=6171.38 00:24:05.192 clat percentiles (usec): 00:24:05.192 | 1.00th=[ 660], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:24:05.192 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:24:05.192 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:24:05.192 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:24:05.192 | 99.99th=[50070] 00:24:05.192 bw ( KiB/s): min= 96, max= 96, per=0.78%, avg=96.00, stdev= 0.00, samples=5 00:24:05.192 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:24:05.192 lat (usec) : 750=1.33%, 1000=1.33% 00:24:05.192 lat (msec) : 50=94.67%, 100=1.33% 00:24:05.192 cpu : usr=0.00%, sys=0.20%, ctx=78, majf=0, minf=1 00:24:05.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:05.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.192 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.192 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:05.192 job2: (groupid=0, jobs=1): err=121 
(file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1605902: Thu Apr 25 20:19:03 2024 00:24:05.192 read: IOPS=3275, BW=12.8MiB/s (13.4MB/s)(34.9MiB/2730msec) 00:24:05.192 slat (usec): min=3, max=15373, avg=12.08, stdev=204.35 00:24:05.192 clat (usec): min=189, max=44643, avg=291.64, stdev=471.88 00:24:05.192 lat (usec): min=196, max=44651, avg=303.71, stdev=516.46 00:24:05.192 clat percentiles (usec): 00:24:05.192 | 1.00th=[ 221], 5.00th=[ 237], 10.00th=[ 245], 20.00th=[ 258], 00:24:05.192 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:24:05.192 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 367], 95.00th=[ 408], 00:24:05.192 | 99.00th=[ 461], 99.50th=[ 490], 99.90th=[ 562], 99.95th=[ 627], 00:24:05.192 | 99.99th=[44827] 00:24:05.192 bw ( KiB/s): min=11608, max=14536, per=100.00%, avg=13176.00, stdev=1124.01, samples=5 00:24:05.192 iops : min= 2902, max= 3634, avg=3294.00, stdev=281.00, samples=5 00:24:05.192 lat (usec) : 250=13.49%, 500=86.11%, 750=0.36%, 1000=0.02% 00:24:05.192 lat (msec) : 50=0.01% 00:24:05.192 cpu : usr=0.88%, sys=3.63%, ctx=8943, majf=0, minf=1 00:24:05.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:05.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.192 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.193 issued rwts: total=8941,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:05.193 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1605903: Thu Apr 25 20:19:03 2024 00:24:05.193 read: IOPS=34, BW=136KiB/s (139kB/s)(356KiB/2615msec) 00:24:05.193 slat (nsec): min=6105, max=41122, avg=19142.44, stdev=13751.25 00:24:05.193 clat (usec): min=229, max=45045, avg=29344.68, stdev=19259.60 00:24:05.193 lat (usec): min=236, max=45073, avg=29363.88, stdev=19265.37 00:24:05.193 clat percentiles (usec): 00:24:05.193 | 1.00th=[ 231], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 310], 00:24:05.193 | 30.00th=[ 791], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:24:05.193 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:24:05.193 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:24:05.193 | 99.99th=[44827] 00:24:05.193 bw ( KiB/s): min= 96, max= 304, per=1.11%, avg=137.60, stdev=93.02, samples=5 00:24:05.193 iops : min= 24, max= 76, avg=34.40, stdev=23.26, samples=5 00:24:05.193 lat (usec) : 250=3.33%, 500=23.33%, 750=1.11%, 1000=2.22% 00:24:05.193 lat (msec) : 50=68.89% 00:24:05.193 cpu : usr=0.00%, sys=0.11%, ctx=90, majf=0, minf=2 00:24:05.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:05.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.193 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.193 issued rwts: total=90,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:05.193 00:24:05.193 Run status group 0 (all jobs): 00:24:05.193 READ: bw=12.1MiB/s (12.7MB/s), 96.6KiB/s-12.8MiB/s (98.9kB/s-13.4MB/s), io=37.0MiB (38.8MB), run=2615-3064msec 00:24:05.193 00:24:05.193 Disk stats (read/write): 00:24:05.193 nvme0n1: ios=358/0, merge=0/0, ticks=2799/0, in_queue=2799, util=94.56% 00:24:05.193 nvme0n2: ios=84/0, merge=0/0, ticks=2828/0, in_queue=2828, util=95.10% 00:24:05.193 nvme0n3: ios=8556/0, merge=0/0, ticks=2473/0, 
in_queue=2473, util=96.07% 00:24:05.193 nvme0n4: ios=88/0, merge=0/0, ticks=2571/0, in_queue=2571, util=96.43% 00:24:05.452 20:19:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:05.452 20:19:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:24:05.452 20:19:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:05.452 20:19:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:24:05.709 20:19:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:05.709 20:19:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:24:05.967 20:19:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:05.967 20:19:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:24:05.967 20:19:03 -- target/fio.sh@69 -- # fio_status=0 00:24:05.967 20:19:03 -- target/fio.sh@70 -- # wait 1605676 00:24:05.967 20:19:03 -- target/fio.sh@70 -- # fio_status=4 00:24:05.967 20:19:03 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:06.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:06.535 20:19:04 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:06.535 20:19:04 -- common/autotest_common.sh@1198 -- # local i=0 00:24:06.535 20:19:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:06.535 20:19:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:06.535 20:19:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:06.535 20:19:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:06.535 20:19:04 -- common/autotest_common.sh@1210 -- # return 0 00:24:06.535 20:19:04 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:24:06.535 20:19:04 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:24:06.535 nvmf hotplug test: fio failed as expected 00:24:06.535 20:19:04 -- target/fio.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:06.535 20:19:04 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:24:06.535 20:19:04 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:24:06.535 20:19:04 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:24:06.535 20:19:04 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:24:06.535 20:19:04 -- target/fio.sh@91 -- # nvmftestfini 00:24:06.535 20:19:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:06.535 20:19:04 -- nvmf/common.sh@116 -- # sync 00:24:06.535 20:19:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:06.535 20:19:04 -- nvmf/common.sh@119 -- # set +e 00:24:06.535 20:19:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:06.535 20:19:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:06.535 rmmod nvme_tcp 00:24:06.535 rmmod nvme_fabrics 00:24:06.535 rmmod nvme_keyring 00:24:06.796 20:19:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:06.796 20:19:04 -- nvmf/common.sh@123 -- # set -e 00:24:06.796 20:19:04 -- nvmf/common.sh@124 -- # return 0 00:24:06.796 20:19:04 -- nvmf/common.sh@477 -- # '[' -n 1602276 ']' 00:24:06.796 
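The err=121 (Remote I/O error) and err=5 results above are the point of this test rather than a real failure: while the four read jobs were still running, the script deleted the raid and malloc bdevs backing the subsystem's namespaces, so the initiator-side devices start returning errors and the wrapper later reports "nvmf hotplug test: fio failed as expected". The numbers are self-consistent, too: job2 (/dev/nvme0n3) completed 8941 reads of 4 KiB in 2.730 s before its backing device disappeared, i.e. 8941 x 4 KiB is about 34.9 MiB and 8941 / 2.73 s is about 3275 IOPS, matching the 12.8 MiB/s line reported for it. Condensed, the hotplug step is just a series of delete RPCs issued while fio is mid-read; a sketch, with the full workspace path to scripts/rpc.py shortened to rpc.py:

    rpc.py bdev_raid_delete concat0
    rpc.py bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        rpc.py bdev_malloc_delete "$m"   # remove each namespace's backing bdev out from under the reader
    done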
20:19:04 -- nvmf/common.sh@478 -- # killprocess 1602276 00:24:06.796 20:19:04 -- common/autotest_common.sh@926 -- # '[' -z 1602276 ']' 00:24:06.796 20:19:04 -- common/autotest_common.sh@930 -- # kill -0 1602276 00:24:06.796 20:19:04 -- common/autotest_common.sh@931 -- # uname 00:24:06.796 20:19:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:06.796 20:19:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1602276 00:24:06.796 20:19:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:06.796 20:19:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:06.796 20:19:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1602276' 00:24:06.796 killing process with pid 1602276 00:24:06.796 20:19:04 -- common/autotest_common.sh@945 -- # kill 1602276 00:24:06.796 20:19:04 -- common/autotest_common.sh@950 -- # wait 1602276 00:24:07.365 20:19:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:07.365 20:19:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:07.365 20:19:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:07.365 20:19:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:07.365 20:19:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:07.365 20:19:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.365 20:19:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.365 20:19:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.269 20:19:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:09.269 00:24:09.269 real 0m26.412s 00:24:09.269 user 2m36.570s 00:24:09.269 sys 0m6.952s 00:24:09.269 20:19:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:09.269 20:19:07 -- common/autotest_common.sh@10 -- # set +x 00:24:09.269 ************************************ 00:24:09.269 END TEST nvmf_fio_target 00:24:09.269 ************************************ 00:24:09.269 20:19:07 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:24:09.269 20:19:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:09.269 20:19:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:09.269 20:19:07 -- common/autotest_common.sh@10 -- # set +x 00:24:09.269 ************************************ 00:24:09.269 START TEST nvmf_bdevio 00:24:09.269 ************************************ 00:24:09.269 20:19:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:24:09.269 * Looking for test storage... 
00:24:09.269 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:24:09.269 20:19:07 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.269 20:19:07 -- nvmf/common.sh@7 -- # uname -s 00:24:09.269 20:19:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.269 20:19:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.269 20:19:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.269 20:19:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.269 20:19:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.269 20:19:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.269 20:19:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.269 20:19:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.269 20:19:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.269 20:19:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.269 20:19:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:24:09.269 20:19:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:24:09.269 20:19:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.269 20:19:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.269 20:19:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:09.269 20:19:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:09.269 20:19:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.269 20:19:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.269 20:19:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.269 20:19:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.269 20:19:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.269 20:19:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.269 20:19:07 -- paths/export.sh@5 -- # export PATH 00:24:09.269 20:19:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.269 20:19:07 -- nvmf/common.sh@46 -- # : 0 00:24:09.269 20:19:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:09.269 20:19:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:09.269 20:19:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:09.269 20:19:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.269 20:19:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.270 20:19:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:09.270 20:19:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:09.270 20:19:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:09.270 20:19:07 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:09.270 20:19:07 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:09.270 20:19:07 -- target/bdevio.sh@14 -- # nvmftestinit 00:24:09.270 20:19:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:09.270 20:19:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.270 20:19:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:09.270 20:19:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:09.270 20:19:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:09.270 20:19:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.270 20:19:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:09.270 20:19:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.270 20:19:07 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:24:09.270 20:19:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:09.270 20:19:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:09.270 20:19:07 -- common/autotest_common.sh@10 -- # set +x 00:24:14.534 20:19:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:14.534 20:19:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:14.534 20:19:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:14.534 20:19:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:14.534 20:19:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:14.534 20:19:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:14.534 20:19:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:14.534 20:19:11 -- nvmf/common.sh@294 -- # net_devs=() 00:24:14.534 20:19:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:14.534 20:19:11 -- 
nvmf/common.sh@295 -- # e810=() 00:24:14.534 20:19:11 -- nvmf/common.sh@295 -- # local -ga e810 00:24:14.534 20:19:11 -- nvmf/common.sh@296 -- # x722=() 00:24:14.534 20:19:11 -- nvmf/common.sh@296 -- # local -ga x722 00:24:14.534 20:19:11 -- nvmf/common.sh@297 -- # mlx=() 00:24:14.534 20:19:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:14.534 20:19:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.534 20:19:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.534 20:19:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.534 20:19:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.534 20:19:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.534 20:19:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.534 20:19:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.534 20:19:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.534 20:19:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.534 20:19:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.534 20:19:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.534 20:19:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:14.534 20:19:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:14.534 20:19:11 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:24:14.534 20:19:11 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:24:14.534 20:19:11 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:24:14.534 20:19:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:14.534 20:19:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:14.534 20:19:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:14.534 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:14.534 20:19:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:14.534 20:19:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:14.534 20:19:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.534 20:19:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.534 20:19:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:14.534 20:19:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:14.534 20:19:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:14.534 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:14.534 20:19:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:14.534 20:19:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:14.534 20:19:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.534 20:19:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.534 20:19:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:14.534 20:19:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:14.534 20:19:11 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:24:14.534 20:19:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:14.534 20:19:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.534 20:19:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:14.534 20:19:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.534 20:19:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:14.534 Found net devices under 0000:27:00.0: cvl_0_0 00:24:14.534 
20:19:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.534 20:19:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:14.534 20:19:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.534 20:19:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:14.534 20:19:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.534 20:19:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:14.534 Found net devices under 0000:27:00.1: cvl_0_1 00:24:14.534 20:19:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.534 20:19:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:14.534 20:19:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:14.534 20:19:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:14.534 20:19:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:14.534 20:19:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:14.534 20:19:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.534 20:19:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.534 20:19:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.534 20:19:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:14.534 20:19:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.534 20:19:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.534 20:19:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:14.534 20:19:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.534 20:19:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.534 20:19:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:14.534 20:19:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:14.534 20:19:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.534 20:19:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.534 20:19:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.534 20:19:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.534 20:19:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:14.534 20:19:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.534 20:19:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.534 20:19:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.534 20:19:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:14.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:24:14.534 00:24:14.534 --- 10.0.0.2 ping statistics --- 00:24:14.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.534 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:24:14.534 20:19:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:14.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:24:14.534 00:24:14.534 --- 10.0.0.1 ping statistics --- 00:24:14.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.534 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:24:14.534 20:19:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.534 20:19:12 -- nvmf/common.sh@410 -- # return 0 00:24:14.534 20:19:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:14.534 20:19:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.534 20:19:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:14.534 20:19:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:14.534 20:19:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.534 20:19:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:14.534 20:19:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:14.534 20:19:12 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:24:14.534 20:19:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:14.534 20:19:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:14.534 20:19:12 -- common/autotest_common.sh@10 -- # set +x 00:24:14.534 20:19:12 -- nvmf/common.sh@469 -- # nvmfpid=1610708 00:24:14.534 20:19:12 -- nvmf/common.sh@470 -- # waitforlisten 1610708 00:24:14.534 20:19:12 -- common/autotest_common.sh@819 -- # '[' -z 1610708 ']' 00:24:14.534 20:19:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.534 20:19:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:14.534 20:19:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.534 20:19:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:14.534 20:19:12 -- common/autotest_common.sh@10 -- # set +x 00:24:14.534 20:19:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:24:14.534 [2024-04-25 20:19:12.332953] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:14.534 [2024-04-25 20:19:12.333054] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.534 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.534 [2024-04-25 20:19:12.446148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:14.792 [2024-04-25 20:19:12.546875] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:14.792 [2024-04-25 20:19:12.547084] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.792 [2024-04-25 20:19:12.547099] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.792 [2024-04-25 20:19:12.547111] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
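Stripped of the xtrace noise, the network bring-up that precedes this target start is straightforward: the machine has two E810 ports (cvl_0_0 and cvl_0_1), and the harness moves one of them into a private network namespace so a single host can act as both NVMe/TCP target (10.0.0.2 inside cvl_0_0_ns_spdk) and initiator (10.0.0.1 in the root namespace); nvmf_tgt is then launched inside that namespace via ip netns exec. A minimal sketch using the interface names and addresses from this log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let the NVMe/TCP port through
    ping -c 1 10.0.0.2                                   # reachability check from the initiator side

The two single-packet pings above (0.389 ms and 0.259 ms round trip) are exactly this reachability check, run in each direction before the target application is started.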
00:24:14.792 [2024-04-25 20:19:12.547338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:14.792 [2024-04-25 20:19:12.547470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:24:14.792 [2024-04-25 20:19:12.547574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:14.792 [2024-04-25 20:19:12.547602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:24:15.367 20:19:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:15.367 20:19:13 -- common/autotest_common.sh@852 -- # return 0 00:24:15.367 20:19:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:15.367 20:19:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:15.367 20:19:13 -- common/autotest_common.sh@10 -- # set +x 00:24:15.367 20:19:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.367 20:19:13 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:15.367 20:19:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:15.367 20:19:13 -- common/autotest_common.sh@10 -- # set +x 00:24:15.367 [2024-04-25 20:19:13.078856] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.367 20:19:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:15.367 20:19:13 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:15.367 20:19:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:15.367 20:19:13 -- common/autotest_common.sh@10 -- # set +x 00:24:15.367 Malloc0 00:24:15.367 20:19:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:15.367 20:19:13 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:15.367 20:19:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:15.367 20:19:13 -- common/autotest_common.sh@10 -- # set +x 00:24:15.367 20:19:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:15.367 20:19:13 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:15.367 20:19:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:15.367 20:19:13 -- common/autotest_common.sh@10 -- # set +x 00:24:15.367 20:19:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:15.367 20:19:13 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:15.367 20:19:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:15.367 20:19:13 -- common/autotest_common.sh@10 -- # set +x 00:24:15.367 [2024-04-25 20:19:13.148913] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.367 20:19:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:15.367 20:19:13 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:24:15.367 20:19:13 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:15.367 20:19:13 -- nvmf/common.sh@520 -- # config=() 00:24:15.367 20:19:13 -- nvmf/common.sh@520 -- # local subsystem config 00:24:15.367 20:19:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:15.367 20:19:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:15.367 { 00:24:15.367 "params": { 00:24:15.367 "name": "Nvme$subsystem", 00:24:15.367 "trtype": "$TEST_TRANSPORT", 00:24:15.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.367 "adrfam": "ipv4", 00:24:15.367 "trsvcid": "$NVMF_PORT", 
00:24:15.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.367 "hdgst": ${hdgst:-false}, 00:24:15.367 "ddgst": ${ddgst:-false} 00:24:15.367 }, 00:24:15.367 "method": "bdev_nvme_attach_controller" 00:24:15.367 } 00:24:15.367 EOF 00:24:15.367 )") 00:24:15.367 20:19:13 -- nvmf/common.sh@542 -- # cat 00:24:15.367 20:19:13 -- nvmf/common.sh@544 -- # jq . 00:24:15.367 20:19:13 -- nvmf/common.sh@545 -- # IFS=, 00:24:15.367 20:19:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:15.367 "params": { 00:24:15.367 "name": "Nvme1", 00:24:15.367 "trtype": "tcp", 00:24:15.367 "traddr": "10.0.0.2", 00:24:15.367 "adrfam": "ipv4", 00:24:15.367 "trsvcid": "4420", 00:24:15.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:15.367 "hdgst": false, 00:24:15.367 "ddgst": false 00:24:15.367 }, 00:24:15.367 "method": "bdev_nvme_attach_controller" 00:24:15.367 }' 00:24:15.367 [2024-04-25 20:19:13.217930] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:15.367 [2024-04-25 20:19:13.218034] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611021 ] 00:24:15.367 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.627 [2024-04-25 20:19:13.315881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:15.627 [2024-04-25 20:19:13.410109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.627 [2024-04-25 20:19:13.410210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.627 [2024-04-25 20:19:13.410216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.885 [2024-04-25 20:19:13.669635] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:24:15.885 [2024-04-25 20:19:13.669671] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:24:15.885 I/O targets: 00:24:15.885 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:15.885 00:24:15.885 00:24:15.885 CUnit - A unit testing framework for C - Version 2.1-3 00:24:15.885 http://cunit.sourceforge.net/ 00:24:15.885 00:24:15.885 00:24:15.885 Suite: bdevio tests on: Nvme1n1 00:24:15.885 Test: blockdev write read block ...passed 00:24:15.885 Test: blockdev write zeroes read block ...passed 00:24:15.885 Test: blockdev write zeroes read no split ...passed 00:24:15.885 Test: blockdev write zeroes read split ...passed 00:24:15.885 Test: blockdev write zeroes read split partial ...passed 00:24:15.885 Test: blockdev reset ...[2024-04-25 20:19:13.797776] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.885 [2024-04-25 20:19:13.797861] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003140 (9): Bad file descriptor 00:24:16.144 [2024-04-25 20:19:13.936188] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
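For reference, the rpc_cmd calls above provision the target end to end before bdevio attaches: create the TCP transport, create a 64 MiB malloc bdev with 512-byte blocks, wrap it in a subsystem, and expose it on 10.0.0.2:4420. A condensed view (rpc_cmd in the test harness is effectively a call to scripts/rpc.py against the nvmf_tgt started earlier; the path is shortened here):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0          # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Note that bdevio does not use the kernel initiator at all: the JSON printed just above (gen_nvmf_target_json) hands it a bdev_nvme_attach_controller entry, so it connects over SPDK's own NVMe/TCP host stack and runs the bdevio test suite against the exported namespace.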
00:24:16.145 passed 00:24:16.145 Test: blockdev write read 8 blocks ...passed 00:24:16.145 Test: blockdev write read size > 128k ...passed 00:24:16.145 Test: blockdev write read invalid size ...passed 00:24:16.145 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:16.145 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:16.145 Test: blockdev write read max offset ...passed 00:24:16.145 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:16.403 Test: blockdev writev readv 8 blocks ...passed 00:24:16.403 Test: blockdev writev readv 30 x 1block ...passed 00:24:16.403 Test: blockdev writev readv block ...passed 00:24:16.403 Test: blockdev writev readv size > 128k ...passed 00:24:16.403 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:16.403 Test: blockdev comparev and writev ...[2024-04-25 20:19:14.155343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:16.403 [2024-04-25 20:19:14.155384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.403 [2024-04-25 20:19:14.155403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:16.403 [2024-04-25 20:19:14.155414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:16.403 [2024-04-25 20:19:14.155769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:16.403 [2024-04-25 20:19:14.155781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:16.403 [2024-04-25 20:19:14.155798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:16.403 [2024-04-25 20:19:14.155809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:16.403 [2024-04-25 20:19:14.156155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:16.403 [2024-04-25 20:19:14.156166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:16.403 [2024-04-25 20:19:14.156178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:16.403 [2024-04-25 20:19:14.156186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:16.403 [2024-04-25 20:19:14.156542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:16.403 [2024-04-25 20:19:14.156556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:16.403 [2024-04-25 20:19:14.156568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:16.403 [2024-04-25 20:19:14.156577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:16.403 passed 00:24:16.403 Test: blockdev nvme passthru rw ...passed 00:24:16.403 Test: blockdev nvme passthru vendor specific ...[2024-04-25 20:19:14.240133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:16.403 [2024-04-25 20:19:14.240158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:16.403 [2024-04-25 20:19:14.240358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:16.403 [2024-04-25 20:19:14.240368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.403 [2024-04-25 20:19:14.240556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:16.403 [2024-04-25 20:19:14.240566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:16.403 [2024-04-25 20:19:14.240759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:16.403 [2024-04-25 20:19:14.240769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:16.403 passed 00:24:16.403 Test: blockdev nvme admin passthru ...passed 00:24:16.403 Test: blockdev copy ...passed 00:24:16.403 00:24:16.403 Run Summary: Type Total Ran Passed Failed Inactive 00:24:16.403 suites 1 1 n/a 0 0 00:24:16.403 tests 23 23 23 0 0 00:24:16.403 asserts 152 152 152 0 n/a 00:24:16.403 00:24:16.403 Elapsed time = 1.273 seconds 00:24:16.969 20:19:14 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:16.969 20:19:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.969 20:19:14 -- common/autotest_common.sh@10 -- # set +x 00:24:16.969 20:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.969 20:19:14 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:16.969 20:19:14 -- target/bdevio.sh@30 -- # nvmftestfini 00:24:16.969 20:19:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:16.969 20:19:14 -- nvmf/common.sh@116 -- # sync 00:24:16.969 20:19:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:16.969 20:19:14 -- nvmf/common.sh@119 -- # set +e 00:24:16.969 20:19:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:16.969 20:19:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:16.969 rmmod nvme_tcp 00:24:16.969 rmmod nvme_fabrics 00:24:16.969 rmmod nvme_keyring 00:24:16.969 20:19:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:16.969 20:19:14 -- nvmf/common.sh@123 -- # set -e 00:24:16.969 20:19:14 -- nvmf/common.sh@124 -- # return 0 00:24:16.969 20:19:14 -- nvmf/common.sh@477 -- # '[' -n 1610708 ']' 00:24:16.969 20:19:14 -- nvmf/common.sh@478 -- # killprocess 1610708 00:24:16.969 20:19:14 -- common/autotest_common.sh@926 -- # '[' -z 1610708 ']' 00:24:16.969 20:19:14 -- common/autotest_common.sh@930 -- # kill -0 1610708 00:24:16.969 20:19:14 -- common/autotest_common.sh@931 -- # uname 00:24:16.969 20:19:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:16.969 20:19:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1610708 00:24:16.969 20:19:14 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:24:16.969 20:19:14 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:24:16.969 20:19:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1610708' 00:24:16.969 killing process with pid 1610708 00:24:16.969 20:19:14 -- common/autotest_common.sh@945 -- # kill 1610708 00:24:16.969 20:19:14 -- common/autotest_common.sh@950 -- # wait 1610708 00:24:17.534 20:19:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:17.534 20:19:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:17.534 20:19:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:17.534 20:19:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:17.534 20:19:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:17.534 20:19:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.534 20:19:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.534 20:19:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.435 20:19:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:19.435 00:24:19.435 real 0m10.262s 00:24:19.435 user 0m14.751s 00:24:19.435 sys 0m4.356s 00:24:19.435 20:19:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:19.435 20:19:17 -- common/autotest_common.sh@10 -- # set +x 00:24:19.435 ************************************ 00:24:19.435 END TEST nvmf_bdevio 00:24:19.435 ************************************ 00:24:19.693 20:19:17 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:24:19.693 20:19:17 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:19.693 20:19:17 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:24:19.693 20:19:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:19.693 20:19:17 -- common/autotest_common.sh@10 -- # set +x 00:24:19.693 ************************************ 00:24:19.693 START TEST nvmf_bdevio_no_huge 00:24:19.693 ************************************ 00:24:19.693 20:19:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:19.693 * Looking for test storage... 
00:24:19.693 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:24:19.693 20:19:17 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.693 20:19:17 -- nvmf/common.sh@7 -- # uname -s 00:24:19.693 20:19:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.693 20:19:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.693 20:19:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.693 20:19:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.693 20:19:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.693 20:19:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.693 20:19:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.693 20:19:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.693 20:19:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.693 20:19:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.693 20:19:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:24:19.693 20:19:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:24:19.693 20:19:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.693 20:19:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.693 20:19:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:19.693 20:19:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:19.693 20:19:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.693 20:19:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.693 20:19:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.693 20:19:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.693 20:19:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.693 20:19:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.693 20:19:17 -- paths/export.sh@5 -- # export PATH 00:24:19.693 20:19:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.693 20:19:17 -- nvmf/common.sh@46 -- # : 0 00:24:19.694 20:19:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:19.694 20:19:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:19.694 20:19:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:19.694 20:19:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.694 20:19:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.694 20:19:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:19.694 20:19:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:19.694 20:19:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:19.694 20:19:17 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:19.694 20:19:17 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:19.694 20:19:17 -- target/bdevio.sh@14 -- # nvmftestinit 00:24:19.694 20:19:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:19.694 20:19:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.694 20:19:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:19.694 20:19:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:19.694 20:19:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:19.694 20:19:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.694 20:19:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.694 20:19:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.694 20:19:17 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:24:19.694 20:19:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:19.694 20:19:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:19.694 20:19:17 -- common/autotest_common.sh@10 -- # set +x 00:24:25.038 20:19:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:25.038 20:19:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:25.038 20:19:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:25.038 20:19:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:25.038 20:19:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:25.038 20:19:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:25.038 20:19:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:25.038 20:19:22 -- nvmf/common.sh@294 -- # net_devs=() 00:24:25.038 20:19:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:25.038 20:19:22 -- 
nvmf/common.sh@295 -- # e810=() 00:24:25.038 20:19:22 -- nvmf/common.sh@295 -- # local -ga e810 00:24:25.038 20:19:22 -- nvmf/common.sh@296 -- # x722=() 00:24:25.038 20:19:22 -- nvmf/common.sh@296 -- # local -ga x722 00:24:25.038 20:19:22 -- nvmf/common.sh@297 -- # mlx=() 00:24:25.038 20:19:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:25.038 20:19:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:25.038 20:19:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:25.038 20:19:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:25.038 20:19:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:25.038 20:19:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:25.038 20:19:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:25.038 20:19:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:25.038 20:19:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:25.038 20:19:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:25.038 20:19:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:25.038 20:19:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:25.038 20:19:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:25.038 20:19:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:25.038 20:19:22 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:24:25.038 20:19:22 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:24:25.038 20:19:22 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:24:25.038 20:19:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:25.038 20:19:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:25.038 20:19:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:25.038 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:25.038 20:19:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:25.038 20:19:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:25.038 20:19:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.038 20:19:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.038 20:19:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:25.038 20:19:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:25.038 20:19:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:25.038 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:25.038 20:19:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:25.038 20:19:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:25.038 20:19:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.038 20:19:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.038 20:19:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:25.038 20:19:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:25.038 20:19:22 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:24:25.038 20:19:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:25.038 20:19:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.038 20:19:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:25.038 20:19:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.038 20:19:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:25.038 Found net devices under 0000:27:00.0: cvl_0_0 00:24:25.038 
20:19:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.038 20:19:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:25.038 20:19:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.038 20:19:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:25.038 20:19:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.038 20:19:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:25.038 Found net devices under 0000:27:00.1: cvl_0_1 00:24:25.038 20:19:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.038 20:19:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:25.038 20:19:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:25.038 20:19:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:25.038 20:19:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:25.038 20:19:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:25.038 20:19:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:25.038 20:19:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:25.038 20:19:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:25.038 20:19:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:25.038 20:19:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:25.038 20:19:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:25.038 20:19:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:25.038 20:19:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:25.038 20:19:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:25.038 20:19:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:25.038 20:19:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:25.038 20:19:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:25.038 20:19:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:25.038 20:19:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:25.038 20:19:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:25.038 20:19:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:25.038 20:19:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:25.038 20:19:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:25.038 20:19:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:25.038 20:19:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:25.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:25.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:24:25.039 00:24:25.039 --- 10.0.0.2 ping statistics --- 00:24:25.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.039 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:24:25.039 20:19:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:25.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:25.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:24:25.039 00:24:25.039 --- 10.0.0.1 ping statistics --- 00:24:25.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.039 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:24:25.039 20:19:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:25.039 20:19:22 -- nvmf/common.sh@410 -- # return 0 00:24:25.039 20:19:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:25.039 20:19:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:25.039 20:19:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:25.039 20:19:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:25.039 20:19:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:25.039 20:19:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:25.039 20:19:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:25.039 20:19:22 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:24:25.039 20:19:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:25.039 20:19:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:25.039 20:19:22 -- common/autotest_common.sh@10 -- # set +x 00:24:25.039 20:19:22 -- nvmf/common.sh@469 -- # nvmfpid=1615212 00:24:25.039 20:19:22 -- nvmf/common.sh@470 -- # waitforlisten 1615212 00:24:25.039 20:19:22 -- common/autotest_common.sh@819 -- # '[' -z 1615212 ']' 00:24:25.039 20:19:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.039 20:19:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:25.039 20:19:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.039 20:19:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:25.039 20:19:22 -- common/autotest_common.sh@10 -- # set +x 00:24:25.039 20:19:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:24:25.039 [2024-04-25 20:19:22.872717] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:25.039 [2024-04-25 20:19:22.872834] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:24:25.304 [2024-04-25 20:19:23.020260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:25.304 [2024-04-25 20:19:23.138705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:25.304 [2024-04-25 20:19:23.138900] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.304 [2024-04-25 20:19:23.138914] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:25.304 [2024-04-25 20:19:23.138924] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
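[editor's note] For reference, the -m 0x78 core mask passed to nvmfappstart in the trace above selects cores 3-6 (0x78 = binary 1111000), which matches the reactor start-up messages that follow. A hypothetical one-liner, not part of the test scripts, to decode such a mask:

    # Decode an SPDK reactor core mask into the CPU cores it enables.
    mask=0x78; for i in $(seq 0 31); do (( (mask >> i) & 1 )) && echo "core $i"; done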
00:24:25.304 [2024-04-25 20:19:23.139120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:25.304 [2024-04-25 20:19:23.139257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:24:25.304 [2024-04-25 20:19:23.139361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:25.304 [2024-04-25 20:19:23.139389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:24:25.873 20:19:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:25.873 20:19:23 -- common/autotest_common.sh@852 -- # return 0 00:24:25.873 20:19:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:25.873 20:19:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:25.873 20:19:23 -- common/autotest_common.sh@10 -- # set +x 00:24:25.873 20:19:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.873 20:19:23 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:25.873 20:19:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:25.873 20:19:23 -- common/autotest_common.sh@10 -- # set +x 00:24:25.873 [2024-04-25 20:19:23.624011] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.873 20:19:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:25.873 20:19:23 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:25.873 20:19:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:25.873 20:19:23 -- common/autotest_common.sh@10 -- # set +x 00:24:25.873 Malloc0 00:24:25.873 20:19:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:25.873 20:19:23 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:25.873 20:19:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:25.873 20:19:23 -- common/autotest_common.sh@10 -- # set +x 00:24:25.873 20:19:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:25.873 20:19:23 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:25.873 20:19:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:25.873 20:19:23 -- common/autotest_common.sh@10 -- # set +x 00:24:25.873 20:19:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:25.873 20:19:23 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:25.873 20:19:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:25.873 20:19:23 -- common/autotest_common.sh@10 -- # set +x 00:24:25.873 [2024-04-25 20:19:23.684565] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.873 20:19:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:25.873 20:19:23 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:24:25.873 20:19:23 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:25.873 20:19:23 -- nvmf/common.sh@520 -- # config=() 00:24:25.873 20:19:23 -- nvmf/common.sh@520 -- # local subsystem config 00:24:25.873 20:19:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:25.873 20:19:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:25.873 { 00:24:25.873 "params": { 00:24:25.873 "name": "Nvme$subsystem", 00:24:25.873 "trtype": "$TEST_TRANSPORT", 00:24:25.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.873 "adrfam": "ipv4", 00:24:25.873 "trsvcid": 
"$NVMF_PORT", 00:24:25.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.873 "hdgst": ${hdgst:-false}, 00:24:25.873 "ddgst": ${ddgst:-false} 00:24:25.873 }, 00:24:25.873 "method": "bdev_nvme_attach_controller" 00:24:25.873 } 00:24:25.873 EOF 00:24:25.873 )") 00:24:25.873 20:19:23 -- nvmf/common.sh@542 -- # cat 00:24:25.873 20:19:23 -- nvmf/common.sh@544 -- # jq . 00:24:25.873 20:19:23 -- nvmf/common.sh@545 -- # IFS=, 00:24:25.873 20:19:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:25.873 "params": { 00:24:25.873 "name": "Nvme1", 00:24:25.873 "trtype": "tcp", 00:24:25.873 "traddr": "10.0.0.2", 00:24:25.873 "adrfam": "ipv4", 00:24:25.873 "trsvcid": "4420", 00:24:25.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:25.873 "hdgst": false, 00:24:25.873 "ddgst": false 00:24:25.873 }, 00:24:25.873 "method": "bdev_nvme_attach_controller" 00:24:25.873 }' 00:24:25.873 [2024-04-25 20:19:23.767642] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:25.873 [2024-04-25 20:19:23.767777] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1615458 ] 00:24:26.133 [2024-04-25 20:19:23.912843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:26.133 [2024-04-25 20:19:24.032561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.133 [2024-04-25 20:19:24.032662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.133 [2024-04-25 20:19:24.032666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.393 [2024-04-25 20:19:24.287380] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:24:26.393 [2024-04-25 20:19:24.287430] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:24:26.393 I/O targets: 00:24:26.393 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:26.393 00:24:26.393 00:24:26.393 CUnit - A unit testing framework for C - Version 2.1-3 00:24:26.393 http://cunit.sourceforge.net/ 00:24:26.393 00:24:26.393 00:24:26.393 Suite: bdevio tests on: Nvme1n1 00:24:26.653 Test: blockdev write read block ...passed 00:24:26.653 Test: blockdev write zeroes read block ...passed 00:24:26.653 Test: blockdev write zeroes read no split ...passed 00:24:26.653 Test: blockdev write zeroes read split ...passed 00:24:26.654 Test: blockdev write zeroes read split partial ...passed 00:24:26.654 Test: blockdev reset ...[2024-04-25 20:19:24.463870] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.654 [2024-04-25 20:19:24.463971] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000002f80 (9): Bad file descriptor 00:24:26.654 [2024-04-25 20:19:24.515744] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:26.654 passed 00:24:26.654 Test: blockdev write read 8 blocks ...passed 00:24:26.654 Test: blockdev write read size > 128k ...passed 00:24:26.654 Test: blockdev write read invalid size ...passed 00:24:26.654 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:26.654 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:26.654 Test: blockdev write read max offset ...passed 00:24:26.913 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:26.913 Test: blockdev writev readv 8 blocks ...passed 00:24:26.913 Test: blockdev writev readv 30 x 1block ...passed 00:24:26.913 Test: blockdev writev readv block ...passed 00:24:26.913 Test: blockdev writev readv size > 128k ...passed 00:24:26.913 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:26.913 Test: blockdev comparev and writev ...[2024-04-25 20:19:24.691378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:26.913 [2024-04-25 20:19:24.691419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.913 [2024-04-25 20:19:24.691436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:26.913 [2024-04-25 20:19:24.691445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.913 [2024-04-25 20:19:24.691788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:26.913 [2024-04-25 20:19:24.691800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:26.913 [2024-04-25 20:19:24.691819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:26.913 [2024-04-25 20:19:24.691828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:26.913 [2024-04-25 20:19:24.692173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:26.913 [2024-04-25 20:19:24.692189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:26.913 [2024-04-25 20:19:24.692201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:26.913 [2024-04-25 20:19:24.692210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:26.913 [2024-04-25 20:19:24.692552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:26.913 [2024-04-25 20:19:24.692563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:26.913 [2024-04-25 20:19:24.692576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:26.913 [2024-04-25 20:19:24.692584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:26.913 passed 00:24:26.913 Test: blockdev nvme passthru rw ...passed 00:24:26.913 Test: blockdev nvme passthru vendor specific ...[2024-04-25 20:19:24.775810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:26.913 [2024-04-25 20:19:24.775836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:26.913 [2024-04-25 20:19:24.775959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:26.913 [2024-04-25 20:19:24.775968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:26.913 [2024-04-25 20:19:24.776069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:26.913 [2024-04-25 20:19:24.776079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:26.913 [2024-04-25 20:19:24.776198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:26.913 [2024-04-25 20:19:24.776209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:26.913 passed 00:24:26.913 Test: blockdev nvme admin passthru ...passed 00:24:26.913 Test: blockdev copy ...passed 00:24:26.913 00:24:26.913 Run Summary: Type Total Ran Passed Failed Inactive 00:24:26.913 suites 1 1 n/a 0 0 00:24:26.913 tests 23 23 23 0 0 00:24:26.913 asserts 152 152 152 0 n/a 00:24:26.913 00:24:26.913 Elapsed time = 1.104 seconds 00:24:27.480 20:19:25 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:27.480 20:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.480 20:19:25 -- common/autotest_common.sh@10 -- # set +x 00:24:27.480 20:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:27.480 20:19:25 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:27.480 20:19:25 -- target/bdevio.sh@30 -- # nvmftestfini 00:24:27.480 20:19:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:27.480 20:19:25 -- nvmf/common.sh@116 -- # sync 00:24:27.480 20:19:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:27.480 20:19:25 -- nvmf/common.sh@119 -- # set +e 00:24:27.480 20:19:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:27.480 20:19:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:27.480 rmmod nvme_tcp 00:24:27.480 rmmod nvme_fabrics 00:24:27.480 rmmod nvme_keyring 00:24:27.480 20:19:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:27.480 20:19:25 -- nvmf/common.sh@123 -- # set -e 00:24:27.480 20:19:25 -- nvmf/common.sh@124 -- # return 0 00:24:27.480 20:19:25 -- nvmf/common.sh@477 -- # '[' -n 1615212 ']' 00:24:27.480 20:19:25 -- nvmf/common.sh@478 -- # killprocess 1615212 00:24:27.480 20:19:25 -- common/autotest_common.sh@926 -- # '[' -z 1615212 ']' 00:24:27.480 20:19:25 -- common/autotest_common.sh@930 -- # kill -0 1615212 00:24:27.480 20:19:25 -- common/autotest_common.sh@931 -- # uname 00:24:27.480 20:19:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:27.480 20:19:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1615212 00:24:27.480 20:19:25 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:24:27.480 20:19:25 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:24:27.480 20:19:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1615212' 00:24:27.480 killing process with pid 1615212 00:24:27.480 20:19:25 -- common/autotest_common.sh@945 -- # kill 1615212 00:24:27.480 20:19:25 -- common/autotest_common.sh@950 -- # wait 1615212 00:24:28.051 20:19:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:28.051 20:19:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:28.051 20:19:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:28.051 20:19:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.051 20:19:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:28.051 20:19:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.051 20:19:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.051 20:19:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.956 20:19:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:29.956 00:24:29.956 real 0m10.378s 00:24:29.956 user 0m13.828s 00:24:29.956 sys 0m4.934s 00:24:29.956 20:19:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:29.956 20:19:27 -- common/autotest_common.sh@10 -- # set +x 00:24:29.956 ************************************ 00:24:29.956 END TEST nvmf_bdevio_no_huge 00:24:29.956 ************************************ 00:24:29.956 20:19:27 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:29.956 20:19:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:29.956 20:19:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:29.956 20:19:27 -- common/autotest_common.sh@10 -- # set +x 00:24:29.956 ************************************ 00:24:29.956 START TEST nvmf_tls 00:24:29.956 ************************************ 00:24:29.956 20:19:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:29.956 * Looking for test storage... 
00:24:29.956 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:24:29.956 20:19:27 -- target/tls.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:29.956 20:19:27 -- nvmf/common.sh@7 -- # uname -s 00:24:29.956 20:19:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:29.956 20:19:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.956 20:19:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.956 20:19:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.956 20:19:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.956 20:19:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.956 20:19:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.956 20:19:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.956 20:19:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.956 20:19:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.956 20:19:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:24:29.956 20:19:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:24:29.956 20:19:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.956 20:19:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.956 20:19:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:29.956 20:19:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:29.956 20:19:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.956 20:19:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.956 20:19:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.956 20:19:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.956 20:19:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.956 20:19:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.956 20:19:27 -- paths/export.sh@5 -- # export PATH 00:24:29.956 20:19:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.956 20:19:27 -- nvmf/common.sh@46 -- # : 0 00:24:29.956 20:19:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:29.956 20:19:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:29.956 20:19:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:29.956 20:19:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.956 20:19:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.956 20:19:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:29.956 20:19:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:29.956 20:19:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:29.956 20:19:27 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:24:30.217 20:19:27 -- target/tls.sh@71 -- # nvmftestinit 00:24:30.217 20:19:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:30.217 20:19:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.217 20:19:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:30.217 20:19:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:30.217 20:19:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:30.217 20:19:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.217 20:19:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.217 20:19:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.217 20:19:27 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:24:30.217 20:19:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:30.217 20:19:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:30.217 20:19:27 -- common/autotest_common.sh@10 -- # set +x 00:24:35.499 20:19:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:35.499 20:19:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:35.499 20:19:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:35.499 20:19:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:35.499 20:19:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:35.499 20:19:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:35.499 20:19:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:35.499 20:19:32 -- nvmf/common.sh@294 -- # net_devs=() 00:24:35.499 20:19:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:35.499 20:19:32 -- nvmf/common.sh@295 -- # e810=() 
00:24:35.499 20:19:32 -- nvmf/common.sh@295 -- # local -ga e810 00:24:35.499 20:19:32 -- nvmf/common.sh@296 -- # x722=() 00:24:35.499 20:19:32 -- nvmf/common.sh@296 -- # local -ga x722 00:24:35.499 20:19:32 -- nvmf/common.sh@297 -- # mlx=() 00:24:35.499 20:19:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:35.499 20:19:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.499 20:19:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.499 20:19:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.499 20:19:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.499 20:19:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.499 20:19:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.499 20:19:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.499 20:19:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.499 20:19:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.499 20:19:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.499 20:19:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.499 20:19:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:35.499 20:19:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:35.499 20:19:32 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:24:35.499 20:19:32 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:24:35.499 20:19:32 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:24:35.499 20:19:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:35.499 20:19:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:35.499 20:19:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:35.499 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:35.499 20:19:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:35.499 20:19:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:35.499 20:19:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.499 20:19:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.499 20:19:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:35.499 20:19:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:35.499 20:19:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:35.499 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:35.499 20:19:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:35.499 20:19:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:35.499 20:19:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.499 20:19:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.499 20:19:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:35.499 20:19:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:35.499 20:19:32 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:24:35.499 20:19:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:35.499 20:19:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.499 20:19:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:35.499 20:19:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.499 20:19:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:35.499 Found net devices under 0000:27:00.0: cvl_0_0 00:24:35.499 20:19:32 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:35.499 20:19:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:35.499 20:19:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.499 20:19:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:35.499 20:19:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.499 20:19:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:35.499 Found net devices under 0000:27:00.1: cvl_0_1 00:24:35.499 20:19:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.499 20:19:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:35.499 20:19:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:35.499 20:19:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:35.499 20:19:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:35.499 20:19:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:35.499 20:19:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.499 20:19:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.499 20:19:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:35.499 20:19:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:35.499 20:19:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:35.499 20:19:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:35.499 20:19:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:35.499 20:19:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:35.499 20:19:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.499 20:19:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:35.499 20:19:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:35.499 20:19:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:35.499 20:19:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:35.499 20:19:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:35.499 20:19:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:35.499 20:19:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:35.499 20:19:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:35.499 20:19:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:35.499 20:19:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:35.499 20:19:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:35.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:35.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:24:35.499 00:24:35.499 --- 10.0.0.2 ping statistics --- 00:24:35.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.499 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:24:35.499 20:19:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:35.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:35.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:24:35.499 00:24:35.499 --- 10.0.0.1 ping statistics --- 00:24:35.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.499 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:24:35.499 20:19:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.499 20:19:33 -- nvmf/common.sh@410 -- # return 0 00:24:35.499 20:19:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:35.499 20:19:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.499 20:19:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:35.499 20:19:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:35.499 20:19:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.499 20:19:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:35.499 20:19:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:35.499 20:19:33 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:35.499 20:19:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:35.499 20:19:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:35.499 20:19:33 -- common/autotest_common.sh@10 -- # set +x 00:24:35.499 20:19:33 -- nvmf/common.sh@469 -- # nvmfpid=1619694 00:24:35.499 20:19:33 -- nvmf/common.sh@470 -- # waitforlisten 1619694 00:24:35.499 20:19:33 -- common/autotest_common.sh@819 -- # '[' -z 1619694 ']' 00:24:35.499 20:19:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.499 20:19:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:35.499 20:19:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.499 20:19:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:35.499 20:19:33 -- common/autotest_common.sh@10 -- # set +x 00:24:35.499 20:19:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:35.499 [2024-04-25 20:19:33.106600] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:35.499 [2024-04-25 20:19:33.106669] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.499 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.499 [2024-04-25 20:19:33.196070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.499 [2024-04-25 20:19:33.286538] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:35.499 [2024-04-25 20:19:33.286697] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.499 [2024-04-25 20:19:33.286712] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.499 [2024-04-25 20:19:33.286721] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:35.499 [2024-04-25 20:19:33.286747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.069 20:19:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:36.069 20:19:33 -- common/autotest_common.sh@852 -- # return 0 00:24:36.069 20:19:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:36.069 20:19:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:36.069 20:19:33 -- common/autotest_common.sh@10 -- # set +x 00:24:36.069 20:19:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.069 20:19:33 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:24:36.069 20:19:33 -- target/tls.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:36.069 true 00:24:36.329 20:19:34 -- target/tls.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:36.329 20:19:34 -- target/tls.sh@82 -- # jq -r .tls_version 00:24:36.329 20:19:34 -- target/tls.sh@82 -- # version=0 00:24:36.329 20:19:34 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:24:36.329 20:19:34 -- target/tls.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:36.587 20:19:34 -- target/tls.sh@90 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:36.587 20:19:34 -- target/tls.sh@90 -- # jq -r .tls_version 00:24:36.587 20:19:34 -- target/tls.sh@90 -- # version=13 00:24:36.587 20:19:34 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:24:36.587 20:19:34 -- target/tls.sh@97 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:36.845 20:19:34 -- target/tls.sh@98 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:36.845 20:19:34 -- target/tls.sh@98 -- # jq -r .tls_version 00:24:36.845 20:19:34 -- target/tls.sh@98 -- # version=7 00:24:36.845 20:19:34 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:24:36.845 20:19:34 -- target/tls.sh@105 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:36.845 20:19:34 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:24:37.104 20:19:34 -- target/tls.sh@105 -- # ktls=false 00:24:37.104 20:19:34 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:24:37.104 20:19:34 -- target/tls.sh@112 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:37.104 20:19:34 -- target/tls.sh@113 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:37.104 20:19:34 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:24:37.362 20:19:35 -- target/tls.sh@113 -- # ktls=true 00:24:37.362 20:19:35 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:24:37.362 20:19:35 -- target/tls.sh@120 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:37.362 20:19:35 -- target/tls.sh@121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:37.362 20:19:35 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:24:37.621 20:19:35 -- target/tls.sh@121 -- # ktls=false 00:24:37.621 20:19:35 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:24:37.621 20:19:35 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:24:37.621 20:19:35 -- target/tls.sh@49 -- # local 
key hash crc 00:24:37.621 20:19:35 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:24:37.621 20:19:35 -- target/tls.sh@51 -- # hash=01 00:24:37.621 20:19:35 -- target/tls.sh@52 -- # tail -c8 00:24:37.621 20:19:35 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:24:37.621 20:19:35 -- target/tls.sh@52 -- # gzip -1 -c 00:24:37.621 20:19:35 -- target/tls.sh@52 -- # head -c 4 00:24:37.621 20:19:35 -- target/tls.sh@52 -- # crc='p$H�' 00:24:37.621 20:19:35 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:24:37.621 20:19:35 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:24:37.621 20:19:35 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:37.621 20:19:35 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:37.621 20:19:35 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:24:37.621 20:19:35 -- target/tls.sh@49 -- # local key hash crc 00:24:37.621 20:19:35 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:24:37.621 20:19:35 -- target/tls.sh@51 -- # hash=01 00:24:37.621 20:19:35 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:24:37.621 20:19:35 -- target/tls.sh@52 -- # gzip -1 -c 00:24:37.621 20:19:35 -- target/tls.sh@52 -- # head -c 4 00:24:37.621 20:19:35 -- target/tls.sh@52 -- # tail -c8 00:24:37.621 20:19:35 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:24:37.621 20:19:35 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:24:37.621 20:19:35 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:24:37.621 20:19:35 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:37.621 20:19:35 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:37.621 20:19:35 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:37.621 20:19:35 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:24:37.621 20:19:35 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:37.621 20:19:35 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:37.621 20:19:35 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:37.621 20:19:35 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:24:37.621 20:19:35 -- target/tls.sh@139 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:37.621 20:19:35 -- target/tls.sh@140 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:24:38.192 20:19:35 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:38.192 20:19:35 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:38.192 20:19:35 -- target/tls.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:38.192 [2024-04-25 20:19:35.958022] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.192 20:19:35 -- target/tls.sh@61 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:38.451 20:19:36 -- target/tls.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:38.451 [2024-04-25 20:19:36.262082] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:38.451 [2024-04-25 20:19:36.262332] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.451 20:19:36 -- target/tls.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:38.709 malloc0 00:24:38.709 20:19:36 -- target/tls.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:38.709 20:19:36 -- target/tls.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:38.968 20:19:36 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:38.968 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.952 Initializing NVMe Controllers 00:24:48.952 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:48.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:48.952 Initialization complete. Launching workers. 
00:24:48.952 ======================================================== 00:24:48.952 Latency(us) 00:24:48.952 Device Information : IOPS MiB/s Average min max 00:24:48.952 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17795.04 69.51 3596.79 1042.97 5308.93 00:24:48.952 ======================================================== 00:24:48.952 Total : 17795.04 69.51 3596.79 1042.97 5308.93 00:24:48.952 00:24:49.210 20:19:46 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:49.210 20:19:46 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:49.210 20:19:46 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:49.210 20:19:46 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:49.210 20:19:46 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:24:49.210 20:19:46 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:49.210 20:19:46 -- target/tls.sh@28 -- # bdevperf_pid=1622351 00:24:49.210 20:19:46 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:49.210 20:19:46 -- target/tls.sh@31 -- # waitforlisten 1622351 /var/tmp/bdevperf.sock 00:24:49.210 20:19:46 -- common/autotest_common.sh@819 -- # '[' -z 1622351 ']' 00:24:49.210 20:19:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:49.210 20:19:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:49.210 20:19:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:49.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:49.210 20:19:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:49.210 20:19:46 -- common/autotest_common.sh@10 -- # set +x 00:24:49.210 20:19:46 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:49.210 [2024-04-25 20:19:46.964715] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
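The NVMeTLSkey-1:01:... strings written to key1.txt and key2.txt above come from the format_interchange_psk helper traced at target/tls.sh@49-54; in outline the derivation is (a sketch of the same pipeline, shown for the first example key):

  key=00112233445566778899aabbccddeeff
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)     # CRC32 lifted from the gzip trailer
  psk="NVMeTLSkey-1:01:$(echo -n "${key}${crc}" | base64):"    # hash id 01, then base64(key + CRC)
  echo -n "$psk" > key1.txt && chmod 0600 key1.txt             # key files are written with mode 0600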
00:24:49.210 [2024-04-25 20:19:46.964831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1622351 ] 00:24:49.210 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.210 [2024-04-25 20:19:47.081375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.470 [2024-04-25 20:19:47.174867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.041 20:19:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:50.041 20:19:47 -- common/autotest_common.sh@852 -- # return 0 00:24:50.041 20:19:47 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:50.041 [2024-04-25 20:19:47.800362] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:50.041 TLSTESTn1 00:24:50.041 20:19:47 -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:50.041 Running I/O for 10 seconds... 00:25:00.082 00:25:00.082 Latency(us) 00:25:00.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.082 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:00.082 Verification LBA range: start 0x0 length 0x2000 00:25:00.082 TLSTESTn1 : 10.01 4423.29 17.28 0.00 0.00 28899.09 5725.78 44978.39 00:25:00.082 =================================================================================================================== 00:25:00.082 Total : 4423.29 17.28 0.00 0.00 28899.09 5725.78 44978.39 00:25:00.082 0 00:25:00.082 20:19:57 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:00.082 20:19:57 -- target/tls.sh@45 -- # killprocess 1622351 00:25:00.082 20:19:57 -- common/autotest_common.sh@926 -- # '[' -z 1622351 ']' 00:25:00.082 20:19:57 -- common/autotest_common.sh@930 -- # kill -0 1622351 00:25:00.082 20:19:57 -- common/autotest_common.sh@931 -- # uname 00:25:00.082 20:19:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:00.082 20:19:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1622351 00:25:00.340 20:19:58 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:00.340 20:19:58 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:00.340 20:19:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1622351' 00:25:00.340 killing process with pid 1622351 00:25:00.340 20:19:58 -- common/autotest_common.sh@945 -- # kill 1622351 00:25:00.340 Received shutdown signal, test time was about 10.000000 seconds 00:25:00.340 00:25:00.340 Latency(us) 00:25:00.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.340 =================================================================================================================== 00:25:00.340 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:00.340 20:19:58 -- common/autotest_common.sh@950 -- # wait 1622351 00:25:00.598 20:19:58 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:25:00.598 20:19:58 -- common/autotest_common.sh@640 -- # local es=0 00:25:00.598 20:19:58 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:25:00.598 20:19:58 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:25:00.598 20:19:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:00.598 20:19:58 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:25:00.598 20:19:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:00.598 20:19:58 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:25:00.598 20:19:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:00.598 20:19:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:00.598 20:19:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:00.598 20:19:58 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:25:00.598 20:19:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:00.598 20:19:58 -- target/tls.sh@28 -- # bdevperf_pid=1624583 00:25:00.598 20:19:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:00.598 20:19:58 -- target/tls.sh@31 -- # waitforlisten 1624583 /var/tmp/bdevperf.sock 00:25:00.598 20:19:58 -- common/autotest_common.sh@819 -- # '[' -z 1624583 ']' 00:25:00.598 20:19:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:00.598 20:19:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:00.598 20:19:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:00.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:00.598 20:19:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:00.598 20:19:58 -- common/autotest_common.sh@10 -- # set +x 00:25:00.598 20:19:58 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:00.598 [2024-04-25 20:19:58.485641] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:00.598 [2024-04-25 20:19:58.485753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1624583 ] 00:25:00.856 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.856 [2024-04-25 20:19:58.597377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.856 [2024-04-25 20:19:58.691064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:01.422 20:19:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:01.422 20:19:59 -- common/autotest_common.sh@852 -- # return 0 00:25:01.422 20:19:59 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:25:01.422 [2024-04-25 20:19:59.304816] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:01.422 [2024-04-25 20:19:59.312797] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:01.422 [2024-04-25 20:19:59.312958] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (107): Transport endpoint is not connected 00:25:01.422 [2024-04-25 20:19:59.313935] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:25:01.422 [2024-04-25 20:19:59.314930] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.422 [2024-04-25 20:19:59.314949] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:01.422 [2024-04-25 20:19:59.314965] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:01.422 request: 00:25:01.422 { 00:25:01.422 "name": "TLSTEST", 00:25:01.423 "trtype": "tcp", 00:25:01.423 "traddr": "10.0.0.2", 00:25:01.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:01.423 "adrfam": "ipv4", 00:25:01.423 "trsvcid": "4420", 00:25:01.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:01.423 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:25:01.423 "method": "bdev_nvme_attach_controller", 00:25:01.423 "req_id": 1 00:25:01.423 } 00:25:01.423 Got JSON-RPC error response 00:25:01.423 response: 00:25:01.423 { 00:25:01.423 "code": -32602, 00:25:01.423 "message": "Invalid parameters" 00:25:01.423 } 00:25:01.423 20:19:59 -- target/tls.sh@36 -- # killprocess 1624583 00:25:01.423 20:19:59 -- common/autotest_common.sh@926 -- # '[' -z 1624583 ']' 00:25:01.423 20:19:59 -- common/autotest_common.sh@930 -- # kill -0 1624583 00:25:01.423 20:19:59 -- common/autotest_common.sh@931 -- # uname 00:25:01.423 20:19:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:01.423 20:19:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1624583 00:25:01.683 20:19:59 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:01.683 20:19:59 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:01.683 20:19:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1624583' 00:25:01.683 killing process with pid 1624583 00:25:01.683 20:19:59 -- common/autotest_common.sh@945 -- # kill 1624583 00:25:01.683 Received shutdown signal, test time was about 10.000000 seconds 00:25:01.683 00:25:01.683 Latency(us) 00:25:01.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.683 =================================================================================================================== 00:25:01.683 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:01.683 20:19:59 -- common/autotest_common.sh@950 -- # wait 1624583 00:25:01.944 20:19:59 -- target/tls.sh@37 -- # return 1 00:25:01.944 20:19:59 -- common/autotest_common.sh@643 -- # es=1 00:25:01.944 20:19:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:01.944 20:19:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:01.944 20:19:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:01.944 20:19:59 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:01.944 20:19:59 -- common/autotest_common.sh@640 -- # local es=0 00:25:01.944 20:19:59 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:01.944 20:19:59 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:25:01.944 20:19:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:01.944 20:19:59 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:25:01.944 20:19:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:01.944 20:19:59 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:01.944 20:19:59 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:01.944 20:19:59 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:01.944 20:19:59 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:25:01.944 20:19:59 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:25:01.944 20:19:59 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:01.944 20:19:59 -- target/tls.sh@28 -- # bdevperf_pid=1624894 00:25:01.944 20:19:59 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:01.944 20:19:59 -- target/tls.sh@31 -- # waitforlisten 1624894 /var/tmp/bdevperf.sock 00:25:01.944 20:19:59 -- common/autotest_common.sh@819 -- # '[' -z 1624894 ']' 00:25:01.944 20:19:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:01.944 20:19:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:01.944 20:19:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:01.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:01.944 20:19:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:01.944 20:19:59 -- common/autotest_common.sh@10 -- # set +x 00:25:01.944 20:19:59 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:01.944 [2024-04-25 20:19:59.845022] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:01.944 [2024-04-25 20:19:59.845165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1624894 ] 00:25:02.211 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.211 [2024-04-25 20:19:59.974608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.212 [2024-04-25 20:20:00.079178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.780 20:20:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:02.780 20:20:00 -- common/autotest_common.sh@852 -- # return 0 00:25:02.780 20:20:00 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:02.780 [2024-04-25 20:20:00.692247] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:02.780 [2024-04-25 20:20:00.700342] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:02.780 [2024-04-25 20:20:00.700373] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:02.780 [2024-04-25 20:20:00.700410] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:02.780 [2024-04-25 20:20:00.700918] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (107): Transport endpoint is not connected 00:25:02.780 [2024-04-25 20:20:00.701895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x613000003300 (9): Bad file descriptor 00:25:02.780 [2024-04-25 20:20:00.702894] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.780 [2024-04-25 20:20:00.702909] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:02.780 [2024-04-25 20:20:00.702923] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.780 request: 00:25:02.780 { 00:25:02.780 "name": "TLSTEST", 00:25:02.780 "trtype": "tcp", 00:25:02.780 "traddr": "10.0.0.2", 00:25:02.780 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:02.780 "adrfam": "ipv4", 00:25:02.780 "trsvcid": "4420", 00:25:02.780 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:02.780 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:25:02.780 "method": "bdev_nvme_attach_controller", 00:25:02.780 "req_id": 1 00:25:02.780 } 00:25:02.780 Got JSON-RPC error response 00:25:02.780 response: 00:25:02.780 { 00:25:02.780 "code": -32602, 00:25:02.780 "message": "Invalid parameters" 00:25:02.780 } 00:25:03.037 20:20:00 -- target/tls.sh@36 -- # killprocess 1624894 00:25:03.037 20:20:00 -- common/autotest_common.sh@926 -- # '[' -z 1624894 ']' 00:25:03.037 20:20:00 -- common/autotest_common.sh@930 -- # kill -0 1624894 00:25:03.037 20:20:00 -- common/autotest_common.sh@931 -- # uname 00:25:03.037 20:20:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:03.037 20:20:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1624894 00:25:03.037 20:20:00 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:03.037 20:20:00 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:03.037 20:20:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1624894' 00:25:03.037 killing process with pid 1624894 00:25:03.037 20:20:00 -- common/autotest_common.sh@945 -- # kill 1624894 00:25:03.037 Received shutdown signal, test time was about 10.000000 seconds 00:25:03.037 00:25:03.037 Latency(us) 00:25:03.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.037 =================================================================================================================== 00:25:03.037 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:03.037 20:20:00 -- common/autotest_common.sh@950 -- # wait 1624894 00:25:03.297 20:20:01 -- target/tls.sh@37 -- # return 1 00:25:03.297 20:20:01 -- common/autotest_common.sh@643 -- # es=1 00:25:03.297 20:20:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:03.297 20:20:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:03.297 20:20:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:03.297 20:20:01 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:03.297 20:20:01 -- common/autotest_common.sh@640 -- # local es=0 00:25:03.297 20:20:01 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:03.297 20:20:01 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:25:03.297 20:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:03.297 20:20:01 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:25:03.297 20:20:01 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:03.297 20:20:01 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:03.297 20:20:01 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:03.297 20:20:01 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:25:03.297 20:20:01 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:03.297 20:20:01 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:25:03.297 20:20:01 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:03.297 20:20:01 -- target/tls.sh@28 -- # bdevperf_pid=1625197 00:25:03.297 20:20:01 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:03.297 20:20:01 -- target/tls.sh@31 -- # waitforlisten 1625197 /var/tmp/bdevperf.sock 00:25:03.297 20:20:01 -- common/autotest_common.sh@819 -- # '[' -z 1625197 ']' 00:25:03.297 20:20:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:03.297 20:20:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:03.297 20:20:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:03.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:03.297 20:20:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:03.297 20:20:01 -- common/autotest_common.sh@10 -- # set +x 00:25:03.297 20:20:01 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:03.297 [2024-04-25 20:20:01.192932] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:03.297 [2024-04-25 20:20:01.193049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1625197 ] 00:25:03.556 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.556 [2024-04-25 20:20:01.311116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.556 [2024-04-25 20:20:01.404882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:04.126 20:20:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:04.126 20:20:01 -- common/autotest_common.sh@852 -- # return 0 00:25:04.126 20:20:01 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:04.126 [2024-04-25 20:20:02.042117] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:04.126 [2024-04-25 20:20:02.050544] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:04.126 [2024-04-25 20:20:02.050570] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:04.126 [2024-04-25 20:20:02.050604] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:04.126 [2024-04-25 20:20:02.051371] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (107): Transport endpoint is not connected 00:25:04.126 [2024-04-25 20:20:02.052344] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:25:04.126 [2024-04-25 20:20:02.053344] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:04.126 [2024-04-25 20:20:02.053361] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:04.126 [2024-04-25 20:20:02.053378] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:25:04.126 request: 00:25:04.126 { 00:25:04.126 "name": "TLSTEST", 00:25:04.126 "trtype": "tcp", 00:25:04.126 "traddr": "10.0.0.2", 00:25:04.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:04.126 "adrfam": "ipv4", 00:25:04.126 "trsvcid": "4420", 00:25:04.126 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:04.126 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:25:04.126 "method": "bdev_nvme_attach_controller", 00:25:04.126 "req_id": 1 00:25:04.126 } 00:25:04.126 Got JSON-RPC error response 00:25:04.126 response: 00:25:04.126 { 00:25:04.126 "code": -32602, 00:25:04.126 "message": "Invalid parameters" 00:25:04.126 } 00:25:04.385 20:20:02 -- target/tls.sh@36 -- # killprocess 1625197 00:25:04.385 20:20:02 -- common/autotest_common.sh@926 -- # '[' -z 1625197 ']' 00:25:04.385 20:20:02 -- common/autotest_common.sh@930 -- # kill -0 1625197 00:25:04.385 20:20:02 -- common/autotest_common.sh@931 -- # uname 00:25:04.385 20:20:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:04.385 20:20:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1625197 00:25:04.385 20:20:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:04.385 20:20:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:04.385 20:20:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1625197' 00:25:04.385 killing process with pid 1625197 00:25:04.385 20:20:02 -- common/autotest_common.sh@945 -- # kill 1625197 00:25:04.385 Received shutdown signal, test time was about 10.000000 seconds 00:25:04.385 00:25:04.385 Latency(us) 00:25:04.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.385 =================================================================================================================== 00:25:04.385 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:04.385 20:20:02 -- common/autotest_common.sh@950 -- # wait 1625197 00:25:04.642 20:20:02 -- target/tls.sh@37 -- # return 1 00:25:04.642 20:20:02 -- common/autotest_common.sh@643 -- # es=1 00:25:04.642 20:20:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:04.642 20:20:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:04.642 20:20:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:04.642 20:20:02 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:04.642 20:20:02 -- common/autotest_common.sh@640 -- # local es=0 00:25:04.642 20:20:02 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:04.642 20:20:02 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:25:04.642 20:20:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:04.642 20:20:02 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:25:04.642 20:20:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:04.642 20:20:02 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:04.642 20:20:02 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:04.642 20:20:02 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:04.642 20:20:02 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:04.642 20:20:02 -- target/tls.sh@23 -- # psk= 00:25:04.642 20:20:02 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:04.642 20:20:02 -- target/tls.sh@28 -- # 
bdevperf_pid=1625509 00:25:04.642 20:20:02 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:04.642 20:20:02 -- target/tls.sh@31 -- # waitforlisten 1625509 /var/tmp/bdevperf.sock 00:25:04.642 20:20:02 -- common/autotest_common.sh@819 -- # '[' -z 1625509 ']' 00:25:04.642 20:20:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:04.642 20:20:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:04.642 20:20:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:04.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:04.642 20:20:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:04.642 20:20:02 -- common/autotest_common.sh@10 -- # set +x 00:25:04.642 20:20:02 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:04.642 [2024-04-25 20:20:02.565907] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:04.642 [2024-04-25 20:20:02.566018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1625509 ] 00:25:04.900 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.900 [2024-04-25 20:20:02.676551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.900 [2024-04-25 20:20:02.769772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.466 20:20:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:05.466 20:20:03 -- common/autotest_common.sh@852 -- # return 0 00:25:05.466 20:20:03 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:05.466 [2024-04-25 20:20:03.390866] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:05.466 [2024-04-25 20:20:03.392503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003140 (9): Bad file descriptor 00:25:05.466 [2024-04-25 20:20:03.393496] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:05.466 [2024-04-25 20:20:03.393513] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:05.466 [2024-04-25 20:20:03.393529] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:05.466 request: 00:25:05.466 { 00:25:05.466 "name": "TLSTEST", 00:25:05.466 "trtype": "tcp", 00:25:05.466 "traddr": "10.0.0.2", 00:25:05.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:05.466 "adrfam": "ipv4", 00:25:05.466 "trsvcid": "4420", 00:25:05.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:05.467 "method": "bdev_nvme_attach_controller", 00:25:05.467 "req_id": 1 00:25:05.467 } 00:25:05.467 Got JSON-RPC error response 00:25:05.467 response: 00:25:05.467 { 00:25:05.467 "code": -32602, 00:25:05.467 "message": "Invalid parameters" 00:25:05.467 } 00:25:05.726 20:20:03 -- target/tls.sh@36 -- # killprocess 1625509 00:25:05.726 20:20:03 -- common/autotest_common.sh@926 -- # '[' -z 1625509 ']' 00:25:05.726 20:20:03 -- common/autotest_common.sh@930 -- # kill -0 1625509 00:25:05.726 20:20:03 -- common/autotest_common.sh@931 -- # uname 00:25:05.726 20:20:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:05.726 20:20:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1625509 00:25:05.726 20:20:03 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:05.726 20:20:03 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:05.726 20:20:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1625509' 00:25:05.726 killing process with pid 1625509 00:25:05.726 20:20:03 -- common/autotest_common.sh@945 -- # kill 1625509 00:25:05.726 Received shutdown signal, test time was about 10.000000 seconds 00:25:05.726 00:25:05.726 Latency(us) 00:25:05.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.726 =================================================================================================================== 00:25:05.726 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:05.726 20:20:03 -- common/autotest_common.sh@950 -- # wait 1625509 00:25:05.986 20:20:03 -- target/tls.sh@37 -- # return 1 00:25:05.986 20:20:03 -- common/autotest_common.sh@643 -- # es=1 00:25:05.986 20:20:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:05.986 20:20:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:05.986 20:20:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:05.986 20:20:03 -- target/tls.sh@167 -- # killprocess 1619694 00:25:05.986 20:20:03 -- common/autotest_common.sh@926 -- # '[' -z 1619694 ']' 00:25:05.986 20:20:03 -- common/autotest_common.sh@930 -- # kill -0 1619694 00:25:05.986 20:20:03 -- common/autotest_common.sh@931 -- # uname 00:25:05.986 20:20:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:05.986 20:20:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1619694 00:25:05.986 20:20:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:05.986 20:20:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:05.986 20:20:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1619694' 00:25:05.986 killing process with pid 1619694 00:25:05.986 20:20:03 -- common/autotest_common.sh@945 -- # kill 1619694 00:25:05.986 20:20:03 -- common/autotest_common.sh@950 -- # wait 1619694 00:25:06.552 20:20:04 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:25:06.552 20:20:04 -- target/tls.sh@49 -- # local key hash crc 00:25:06.552 20:20:04 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:25:06.552 20:20:04 -- target/tls.sh@51 -- # hash=02 00:25:06.552 20:20:04 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:25:06.552 20:20:04 -- target/tls.sh@52 -- # gzip -1 -c 00:25:06.552 20:20:04 -- target/tls.sh@52 -- # tail -c8 00:25:06.552 20:20:04 -- target/tls.sh@52 -- # head -c 4 00:25:06.552 20:20:04 -- target/tls.sh@52 -- # crc='�e�'\''' 00:25:06.552 20:20:04 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:25:06.552 20:20:04 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:25:06.552 20:20:04 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:06.552 20:20:04 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:06.552 20:20:04 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:06.552 20:20:04 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:06.552 20:20:04 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:06.552 20:20:04 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:25:06.552 20:20:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:06.552 20:20:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:06.552 20:20:04 -- common/autotest_common.sh@10 -- # set +x 00:25:06.552 20:20:04 -- nvmf/common.sh@469 -- # nvmfpid=1625835 00:25:06.553 20:20:04 -- nvmf/common.sh@470 -- # waitforlisten 1625835 00:25:06.553 20:20:04 -- common/autotest_common.sh@819 -- # '[' -z 1625835 ']' 00:25:06.553 20:20:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.553 20:20:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:06.553 20:20:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.553 20:20:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:06.553 20:20:04 -- common/autotest_common.sh@10 -- # set +x 00:25:06.553 20:20:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:06.810 [2024-04-25 20:20:04.517486] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:06.810 [2024-04-25 20:20:04.517593] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.810 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.810 [2024-04-25 20:20:04.639015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.810 [2024-04-25 20:20:04.734115] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:06.810 [2024-04-25 20:20:04.734297] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.810 [2024-04-25 20:20:04.734310] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.810 [2024-04-25 20:20:04.734320] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
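The key_long.txt material generated above runs through the same pipeline, but with a 48-character key and hash field 02; in the NVMe/TCP PSK interchange format ("NVMeTLSkey-1:<hash>:<base64>:") the 01 and 02 identifiers select the retained-PSK hash, SHA-256 and SHA-384 respectively. A sketch of the long-key variant:

  key=00112233445566778899aabbccddeeff0011223344556677
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
  key_long="NVMeTLSkey-1:02:$(echo -n "${key}${crc}" | base64):"   # matches the value written to key_long.txt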
00:25:06.810 [2024-04-25 20:20:04.734349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.378 20:20:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:07.378 20:20:05 -- common/autotest_common.sh@852 -- # return 0 00:25:07.378 20:20:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:07.378 20:20:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:07.378 20:20:05 -- common/autotest_common.sh@10 -- # set +x 00:25:07.378 20:20:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.378 20:20:05 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:07.378 20:20:05 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:07.378 20:20:05 -- target/tls.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:07.638 [2024-04-25 20:20:05.359262] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.638 20:20:05 -- target/tls.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:07.638 20:20:05 -- target/tls.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:07.899 [2024-04-25 20:20:05.651322] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:07.899 [2024-04-25 20:20:05.651592] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.899 20:20:05 -- target/tls.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:07.899 malloc0 00:25:08.160 20:20:05 -- target/tls.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:08.160 20:20:05 -- target/tls.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:08.420 20:20:06 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:08.420 20:20:06 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:08.420 20:20:06 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:08.420 20:20:06 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:08.420 20:20:06 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:25:08.420 20:20:06 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:08.420 20:20:06 -- target/tls.sh@28 -- # bdevperf_pid=1626163 00:25:08.420 20:20:06 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:08.420 20:20:06 -- target/tls.sh@31 -- # waitforlisten 1626163 /var/tmp/bdevperf.sock 00:25:08.420 20:20:06 -- common/autotest_common.sh@819 -- # '[' -z 1626163 ']' 00:25:08.420 20:20:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:08.420 20:20:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:08.420 20:20:06 -- common/autotest_common.sh@826 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:08.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:08.420 20:20:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:08.420 20:20:06 -- common/autotest_common.sh@10 -- # set +x 00:25:08.420 20:20:06 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:08.420 [2024-04-25 20:20:06.201259] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:08.420 [2024-04-25 20:20:06.201371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1626163 ] 00:25:08.420 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.420 [2024-04-25 20:20:06.313365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.678 [2024-04-25 20:20:06.407313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:09.247 20:20:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:09.247 20:20:06 -- common/autotest_common.sh@852 -- # return 0 00:25:09.247 20:20:06 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:09.247 [2024-04-25 20:20:07.035522] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:09.247 TLSTESTn1 00:25:09.247 20:20:07 -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:09.507 Running I/O for 10 seconds... 
00:25:19.493 00:25:19.493 Latency(us) 00:25:19.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.493 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:19.493 Verification LBA range: start 0x0 length 0x2000 00:25:19.493 TLSTESTn1 : 10.01 6711.76 26.22 0.00 0.00 19049.44 4691.00 42218.98 00:25:19.493 =================================================================================================================== 00:25:19.493 Total : 6711.76 26.22 0.00 0.00 19049.44 4691.00 42218.98 00:25:19.493 0 00:25:19.493 20:20:17 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:19.493 20:20:17 -- target/tls.sh@45 -- # killprocess 1626163 00:25:19.493 20:20:17 -- common/autotest_common.sh@926 -- # '[' -z 1626163 ']' 00:25:19.493 20:20:17 -- common/autotest_common.sh@930 -- # kill -0 1626163 00:25:19.493 20:20:17 -- common/autotest_common.sh@931 -- # uname 00:25:19.493 20:20:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:19.494 20:20:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1626163 00:25:19.494 20:20:17 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:19.494 20:20:17 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:19.494 20:20:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1626163' 00:25:19.494 killing process with pid 1626163 00:25:19.494 20:20:17 -- common/autotest_common.sh@945 -- # kill 1626163 00:25:19.494 Received shutdown signal, test time was about 10.000000 seconds 00:25:19.494 00:25:19.494 Latency(us) 00:25:19.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.494 =================================================================================================================== 00:25:19.494 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:19.494 20:20:17 -- common/autotest_common.sh@950 -- # wait 1626163 00:25:19.754 20:20:17 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:19.754 20:20:17 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:19.754 20:20:17 -- common/autotest_common.sh@640 -- # local es=0 00:25:19.754 20:20:17 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:19.754 20:20:17 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:25:19.754 20:20:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:19.754 20:20:17 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:25:19.754 20:20:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:19.754 20:20:17 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:19.754 20:20:17 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:19.754 20:20:17 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:19.754 20:20:17 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:19.754 20:20:17 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:25:19.754 20:20:17 -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:19.754 20:20:17 -- target/tls.sh@28 -- # bdevperf_pid=1628403 00:25:19.754 20:20:17 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:19.754 20:20:17 -- target/tls.sh@31 -- # waitforlisten 1628403 /var/tmp/bdevperf.sock 00:25:19.754 20:20:17 -- common/autotest_common.sh@819 -- # '[' -z 1628403 ']' 00:25:19.754 20:20:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:19.754 20:20:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:19.754 20:20:17 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:19.754 20:20:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:19.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:19.754 20:20:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:19.754 20:20:17 -- common/autotest_common.sh@10 -- # set +x 00:25:20.016 [2024-04-25 20:20:17.744787] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:20.016 [2024-04-25 20:20:17.744930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1628403 ] 00:25:20.016 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.016 [2024-04-25 20:20:17.876666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.276 [2024-04-25 20:20:17.971689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.535 20:20:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:20.535 20:20:18 -- common/autotest_common.sh@852 -- # return 0 00:25:20.535 20:20:18 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:20.794 [2024-04-25 20:20:18.562547] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:20.794 [2024-04-25 20:20:18.562595] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:25:20.794 request: 00:25:20.794 { 00:25:20.794 "name": "TLSTEST", 00:25:20.794 "trtype": "tcp", 00:25:20.794 "traddr": "10.0.0.2", 00:25:20.794 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:20.794 "adrfam": "ipv4", 00:25:20.794 "trsvcid": "4420", 00:25:20.794 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.794 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:25:20.794 "method": "bdev_nvme_attach_controller", 00:25:20.794 "req_id": 1 00:25:20.794 } 00:25:20.794 Got JSON-RPC error response 00:25:20.794 response: 00:25:20.794 { 00:25:20.794 "code": -22, 00:25:20.794 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:25:20.794 } 00:25:20.794 20:20:18 -- target/tls.sh@36 -- # killprocess 1628403 00:25:20.794 20:20:18 -- common/autotest_common.sh@926 -- # '[' -z 1628403 ']' 00:25:20.794 20:20:18 -- common/autotest_common.sh@930 -- # kill -0 1628403 00:25:20.794 20:20:18 
-- common/autotest_common.sh@931 -- # uname 00:25:20.794 20:20:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:20.794 20:20:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1628403 00:25:20.794 20:20:18 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:20.794 20:20:18 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:20.794 20:20:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1628403' 00:25:20.794 killing process with pid 1628403 00:25:20.794 20:20:18 -- common/autotest_common.sh@945 -- # kill 1628403 00:25:20.794 Received shutdown signal, test time was about 10.000000 seconds 00:25:20.794 00:25:20.794 Latency(us) 00:25:20.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.794 =================================================================================================================== 00:25:20.794 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:20.794 20:20:18 -- common/autotest_common.sh@950 -- # wait 1628403 00:25:21.363 20:20:18 -- target/tls.sh@37 -- # return 1 00:25:21.363 20:20:18 -- common/autotest_common.sh@643 -- # es=1 00:25:21.363 20:20:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:21.363 20:20:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:21.363 20:20:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:21.363 20:20:18 -- target/tls.sh@183 -- # killprocess 1625835 00:25:21.363 20:20:18 -- common/autotest_common.sh@926 -- # '[' -z 1625835 ']' 00:25:21.363 20:20:18 -- common/autotest_common.sh@930 -- # kill -0 1625835 00:25:21.363 20:20:18 -- common/autotest_common.sh@931 -- # uname 00:25:21.363 20:20:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:21.363 20:20:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1625835 00:25:21.363 20:20:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:21.363 20:20:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:21.363 20:20:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1625835' 00:25:21.363 killing process with pid 1625835 00:25:21.363 20:20:19 -- common/autotest_common.sh@945 -- # kill 1625835 00:25:21.363 20:20:19 -- common/autotest_common.sh@950 -- # wait 1625835 00:25:21.935 20:20:19 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:25:21.935 20:20:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:21.935 20:20:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:21.935 20:20:19 -- common/autotest_common.sh@10 -- # set +x 00:25:21.935 20:20:19 -- nvmf/common.sh@469 -- # nvmfpid=1628863 00:25:21.935 20:20:19 -- nvmf/common.sh@470 -- # waitforlisten 1628863 00:25:21.935 20:20:19 -- common/autotest_common.sh@819 -- # '[' -z 1628863 ']' 00:25:21.935 20:20:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.935 20:20:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:21.935 20:20:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
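[Editor's note on the failure above: the -22 response from bdev_nvme_attach_controller ("Incorrect permissions for PSK file") is the negative case target/tls.sh is exercising at this point — the target refuses to load a pre-shared key file whose permissions it considers too open. A minimal sketch of the remedy the script applies later in this log (the chmod at target/tls.sh@190), assuming the same workspace paths:
    chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt
    /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt
The rpc.py invocation is the same one shown above; only the key file mode changes.]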
00:25:21.935 20:20:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:21.935 20:20:19 -- common/autotest_common.sh@10 -- # set +x 00:25:21.935 20:20:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:21.935 [2024-04-25 20:20:19.677829] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:21.935 [2024-04-25 20:20:19.677965] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.935 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.935 [2024-04-25 20:20:19.813426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.194 [2024-04-25 20:20:19.908636] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:22.194 [2024-04-25 20:20:19.908851] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.194 [2024-04-25 20:20:19.908865] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.194 [2024-04-25 20:20:19.908875] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:22.195 [2024-04-25 20:20:19.908904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.484 20:20:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:22.484 20:20:20 -- common/autotest_common.sh@852 -- # return 0 00:25:22.484 20:20:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:22.484 20:20:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:22.484 20:20:20 -- common/autotest_common.sh@10 -- # set +x 00:25:22.484 20:20:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.484 20:20:20 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:22.484 20:20:20 -- common/autotest_common.sh@640 -- # local es=0 00:25:22.484 20:20:20 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:22.484 20:20:20 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:25:22.484 20:20:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:22.484 20:20:20 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:25:22.484 20:20:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:22.484 20:20:20 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:22.484 20:20:20 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:22.484 20:20:20 -- target/tls.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:22.763 [2024-04-25 20:20:20.521274] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.763 20:20:20 -- target/tls.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:22.764 20:20:20 -- target/tls.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:23.021 [2024-04-25 20:20:20.789361] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:23.021 [2024-04-25 20:20:20.789613] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:23.021 20:20:20 -- target/tls.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:23.021 malloc0 00:25:23.281 20:20:20 -- target/tls.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:23.281 20:20:21 -- target/tls.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:23.281 [2024-04-25 20:20:21.196307] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:25:23.281 [2024-04-25 20:20:21.196351] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:25:23.281 [2024-04-25 20:20:21.196370] subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:25:23.281 request: 00:25:23.281 { 00:25:23.281 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.281 "host": "nqn.2016-06.io.spdk:host1", 00:25:23.281 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:25:23.281 "method": "nvmf_subsystem_add_host", 00:25:23.281 "req_id": 1 00:25:23.281 } 00:25:23.281 Got JSON-RPC error response 00:25:23.281 response: 00:25:23.281 { 00:25:23.281 "code": -32603, 00:25:23.281 "message": "Internal error" 00:25:23.281 } 00:25:23.542 20:20:21 -- common/autotest_common.sh@643 -- # es=1 00:25:23.542 20:20:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:23.542 20:20:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:23.542 20:20:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:23.542 20:20:21 -- target/tls.sh@189 -- # killprocess 1628863 00:25:23.542 20:20:21 -- common/autotest_common.sh@926 -- # '[' -z 1628863 ']' 00:25:23.542 20:20:21 -- common/autotest_common.sh@930 -- # kill -0 1628863 00:25:23.542 20:20:21 -- common/autotest_common.sh@931 -- # uname 00:25:23.542 20:20:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:23.542 20:20:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1628863 00:25:23.542 20:20:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:23.542 20:20:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:23.542 20:20:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1628863' 00:25:23.542 killing process with pid 1628863 00:25:23.542 20:20:21 -- common/autotest_common.sh@945 -- # kill 1628863 00:25:23.542 20:20:21 -- common/autotest_common.sh@950 -- # wait 1628863 00:25:24.112 20:20:21 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:24.112 20:20:21 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:25:24.112 20:20:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:24.112 20:20:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:24.112 20:20:21 -- common/autotest_common.sh@10 -- # set +x 00:25:24.112 20:20:21 -- nvmf/common.sh@469 -- # nvmfpid=1629206 00:25:24.112 20:20:21 -- nvmf/common.sh@470 -- # 
waitforlisten 1629206 00:25:24.112 20:20:21 -- common/autotest_common.sh@819 -- # '[' -z 1629206 ']' 00:25:24.112 20:20:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:24.112 20:20:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.112 20:20:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:24.112 20:20:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.112 20:20:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:24.112 20:20:21 -- common/autotest_common.sh@10 -- # set +x 00:25:24.112 [2024-04-25 20:20:21.837629] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:24.112 [2024-04-25 20:20:21.837755] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.112 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.112 [2024-04-25 20:20:21.964015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.369 [2024-04-25 20:20:22.062722] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:24.369 [2024-04-25 20:20:22.062904] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.369 [2024-04-25 20:20:22.062918] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.369 [2024-04-25 20:20:22.062928] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:24.369 [2024-04-25 20:20:22.062961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.627 20:20:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:24.627 20:20:22 -- common/autotest_common.sh@852 -- # return 0 00:25:24.627 20:20:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:24.627 20:20:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:24.627 20:20:22 -- common/autotest_common.sh@10 -- # set +x 00:25:24.627 20:20:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.628 20:20:22 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:24.628 20:20:22 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:24.628 20:20:22 -- target/tls.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:24.886 [2024-04-25 20:20:22.666953] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.886 20:20:22 -- target/tls.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:24.886 20:20:22 -- target/tls.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:25.144 [2024-04-25 20:20:22.923031] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:25.144 [2024-04-25 20:20:22.923252] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.144 20:20:22 -- target/tls.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:25.144 malloc0 00:25:25.403 20:20:23 -- target/tls.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:25.403 20:20:23 -- target/tls.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:25.664 20:20:23 -- target/tls.sh@197 -- # bdevperf_pid=1629534 00:25:25.664 20:20:23 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:25.664 20:20:23 -- target/tls.sh@196 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:25.664 20:20:23 -- target/tls.sh@200 -- # waitforlisten 1629534 /var/tmp/bdevperf.sock 00:25:25.664 20:20:23 -- common/autotest_common.sh@819 -- # '[' -z 1629534 ']' 00:25:25.664 20:20:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:25.664 20:20:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:25.664 20:20:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:25.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
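[Editor's note: the target-side sequence that setup_nvmf_tgt drives over rpc.py — shown in full above for both the first (failed) and second (successful) attempt — condenses to the sketch below; rpc.py abbreviates /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py and KEY the key_long.txt path:
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk KEY
The -k flag on the listener and the --psk on the host entry are what enable TLS on port 4420; with the key file now at mode 0600 the add_host call succeeds where the earlier attempt returned -32603.]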
00:25:25.664 20:20:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:25.664 20:20:23 -- common/autotest_common.sh@10 -- # set +x 00:25:25.664 [2024-04-25 20:20:23.378508] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:25.664 [2024-04-25 20:20:23.378588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1629534 ] 00:25:25.664 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.664 [2024-04-25 20:20:23.464367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.664 [2024-04-25 20:20:23.558798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:26.234 20:20:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:26.234 20:20:24 -- common/autotest_common.sh@852 -- # return 0 00:25:26.234 20:20:24 -- target/tls.sh@201 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:26.492 [2024-04-25 20:20:24.199286] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:26.492 TLSTESTn1 00:25:26.492 20:20:24 -- target/tls.sh@205 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py save_config 00:25:26.751 20:20:24 -- target/tls.sh@205 -- # tgtconf='{ 00:25:26.751 "subsystems": [ 00:25:26.751 { 00:25:26.751 "subsystem": "iobuf", 00:25:26.751 "config": [ 00:25:26.751 { 00:25:26.751 "method": "iobuf_set_options", 00:25:26.751 "params": { 00:25:26.751 "small_pool_count": 8192, 00:25:26.751 "large_pool_count": 1024, 00:25:26.751 "small_bufsize": 8192, 00:25:26.751 "large_bufsize": 135168 00:25:26.751 } 00:25:26.751 } 00:25:26.751 ] 00:25:26.751 }, 00:25:26.751 { 00:25:26.751 "subsystem": "sock", 00:25:26.751 "config": [ 00:25:26.751 { 00:25:26.751 "method": "sock_impl_set_options", 00:25:26.751 "params": { 00:25:26.751 "impl_name": "posix", 00:25:26.751 "recv_buf_size": 2097152, 00:25:26.751 "send_buf_size": 2097152, 00:25:26.751 "enable_recv_pipe": true, 00:25:26.751 "enable_quickack": false, 00:25:26.751 "enable_placement_id": 0, 00:25:26.751 "enable_zerocopy_send_server": true, 00:25:26.751 "enable_zerocopy_send_client": false, 00:25:26.751 "zerocopy_threshold": 0, 00:25:26.751 "tls_version": 0, 00:25:26.751 "enable_ktls": false 00:25:26.751 } 00:25:26.751 }, 00:25:26.751 { 00:25:26.751 "method": "sock_impl_set_options", 00:25:26.751 "params": { 00:25:26.751 "impl_name": "ssl", 00:25:26.751 "recv_buf_size": 4096, 00:25:26.751 "send_buf_size": 4096, 00:25:26.751 "enable_recv_pipe": true, 00:25:26.751 "enable_quickack": false, 00:25:26.751 "enable_placement_id": 0, 00:25:26.751 "enable_zerocopy_send_server": true, 00:25:26.751 "enable_zerocopy_send_client": false, 00:25:26.751 "zerocopy_threshold": 0, 00:25:26.751 "tls_version": 0, 00:25:26.751 "enable_ktls": false 00:25:26.751 } 00:25:26.751 } 00:25:26.751 ] 00:25:26.751 }, 00:25:26.751 { 00:25:26.751 "subsystem": "vmd", 00:25:26.751 "config": [] 00:25:26.751 }, 00:25:26.751 { 00:25:26.751 "subsystem": "accel", 00:25:26.751 "config": [ 00:25:26.751 { 00:25:26.751 "method": "accel_set_options", 00:25:26.751 "params": { 00:25:26.751 "small_cache_size": 128, 00:25:26.751 
"large_cache_size": 16, 00:25:26.751 "task_count": 2048, 00:25:26.751 "sequence_count": 2048, 00:25:26.751 "buf_count": 2048 00:25:26.751 } 00:25:26.751 } 00:25:26.751 ] 00:25:26.751 }, 00:25:26.751 { 00:25:26.751 "subsystem": "bdev", 00:25:26.751 "config": [ 00:25:26.751 { 00:25:26.751 "method": "bdev_set_options", 00:25:26.751 "params": { 00:25:26.751 "bdev_io_pool_size": 65535, 00:25:26.751 "bdev_io_cache_size": 256, 00:25:26.751 "bdev_auto_examine": true, 00:25:26.751 "iobuf_small_cache_size": 128, 00:25:26.751 "iobuf_large_cache_size": 16 00:25:26.751 } 00:25:26.751 }, 00:25:26.751 { 00:25:26.751 "method": "bdev_raid_set_options", 00:25:26.751 "params": { 00:25:26.751 "process_window_size_kb": 1024 00:25:26.751 } 00:25:26.751 }, 00:25:26.751 { 00:25:26.751 "method": "bdev_iscsi_set_options", 00:25:26.751 "params": { 00:25:26.751 "timeout_sec": 30 00:25:26.751 } 00:25:26.751 }, 00:25:26.751 { 00:25:26.751 "method": "bdev_nvme_set_options", 00:25:26.751 "params": { 00:25:26.751 "action_on_timeout": "none", 00:25:26.751 "timeout_us": 0, 00:25:26.751 "timeout_admin_us": 0, 00:25:26.751 "keep_alive_timeout_ms": 10000, 00:25:26.751 "transport_retry_count": 4, 00:25:26.751 "arbitration_burst": 0, 00:25:26.751 "low_priority_weight": 0, 00:25:26.751 "medium_priority_weight": 0, 00:25:26.751 "high_priority_weight": 0, 00:25:26.751 "nvme_adminq_poll_period_us": 10000, 00:25:26.751 "nvme_ioq_poll_period_us": 0, 00:25:26.751 "io_queue_requests": 0, 00:25:26.751 "delay_cmd_submit": true, 00:25:26.751 "bdev_retry_count": 3, 00:25:26.751 "transport_ack_timeout": 0, 00:25:26.751 "ctrlr_loss_timeout_sec": 0, 00:25:26.751 "reconnect_delay_sec": 0, 00:25:26.751 "fast_io_fail_timeout_sec": 0, 00:25:26.751 "generate_uuids": false, 00:25:26.751 "transport_tos": 0, 00:25:26.751 "io_path_stat": false, 00:25:26.751 "allow_accel_sequence": false 00:25:26.751 } 00:25:26.751 }, 00:25:26.751 { 00:25:26.752 "method": "bdev_nvme_set_hotplug", 00:25:26.752 "params": { 00:25:26.752 "period_us": 100000, 00:25:26.752 "enable": false 00:25:26.752 } 00:25:26.752 }, 00:25:26.752 { 00:25:26.752 "method": "bdev_malloc_create", 00:25:26.752 "params": { 00:25:26.752 "name": "malloc0", 00:25:26.752 "num_blocks": 8192, 00:25:26.752 "block_size": 4096, 00:25:26.752 "physical_block_size": 4096, 00:25:26.752 "uuid": "20687cb1-baeb-4cbe-b98a-e3f5fe72c597", 00:25:26.752 "optimal_io_boundary": 0 00:25:26.752 } 00:25:26.752 }, 00:25:26.752 { 00:25:26.752 "method": "bdev_wait_for_examine" 00:25:26.752 } 00:25:26.752 ] 00:25:26.752 }, 00:25:26.752 { 00:25:26.752 "subsystem": "nbd", 00:25:26.752 "config": [] 00:25:26.752 }, 00:25:26.752 { 00:25:26.752 "subsystem": "scheduler", 00:25:26.752 "config": [ 00:25:26.752 { 00:25:26.752 "method": "framework_set_scheduler", 00:25:26.752 "params": { 00:25:26.752 "name": "static" 00:25:26.752 } 00:25:26.752 } 00:25:26.752 ] 00:25:26.752 }, 00:25:26.752 { 00:25:26.752 "subsystem": "nvmf", 00:25:26.752 "config": [ 00:25:26.752 { 00:25:26.752 "method": "nvmf_set_config", 00:25:26.752 "params": { 00:25:26.752 "discovery_filter": "match_any", 00:25:26.752 "admin_cmd_passthru": { 00:25:26.752 "identify_ctrlr": false 00:25:26.752 } 00:25:26.752 } 00:25:26.752 }, 00:25:26.752 { 00:25:26.752 "method": "nvmf_set_max_subsystems", 00:25:26.752 "params": { 00:25:26.752 "max_subsystems": 1024 00:25:26.752 } 00:25:26.752 }, 00:25:26.752 { 00:25:26.752 "method": "nvmf_set_crdt", 00:25:26.752 "params": { 00:25:26.752 "crdt1": 0, 00:25:26.752 "crdt2": 0, 00:25:26.752 "crdt3": 0 00:25:26.752 } 00:25:26.752 }, 
00:25:26.752 { 00:25:26.752 "method": "nvmf_create_transport", 00:25:26.752 "params": { 00:25:26.752 "trtype": "TCP", 00:25:26.752 "max_queue_depth": 128, 00:25:26.752 "max_io_qpairs_per_ctrlr": 127, 00:25:26.752 "in_capsule_data_size": 4096, 00:25:26.752 "max_io_size": 131072, 00:25:26.752 "io_unit_size": 131072, 00:25:26.752 "max_aq_depth": 128, 00:25:26.752 "num_shared_buffers": 511, 00:25:26.752 "buf_cache_size": 4294967295, 00:25:26.752 "dif_insert_or_strip": false, 00:25:26.752 "zcopy": false, 00:25:26.752 "c2h_success": false, 00:25:26.752 "sock_priority": 0, 00:25:26.752 "abort_timeout_sec": 1 00:25:26.752 } 00:25:26.752 }, 00:25:26.752 { 00:25:26.752 "method": "nvmf_create_subsystem", 00:25:26.752 "params": { 00:25:26.752 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:26.752 "allow_any_host": false, 00:25:26.752 "serial_number": "SPDK00000000000001", 00:25:26.752 "model_number": "SPDK bdev Controller", 00:25:26.752 "max_namespaces": 10, 00:25:26.752 "min_cntlid": 1, 00:25:26.752 "max_cntlid": 65519, 00:25:26.752 "ana_reporting": false 00:25:26.752 } 00:25:26.752 }, 00:25:26.752 { 00:25:26.752 "method": "nvmf_subsystem_add_host", 00:25:26.752 "params": { 00:25:26.752 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:26.752 "host": "nqn.2016-06.io.spdk:host1", 00:25:26.752 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:25:26.752 } 00:25:26.752 }, 00:25:26.752 { 00:25:26.752 "method": "nvmf_subsystem_add_ns", 00:25:26.752 "params": { 00:25:26.752 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:26.752 "namespace": { 00:25:26.752 "nsid": 1, 00:25:26.752 "bdev_name": "malloc0", 00:25:26.752 "nguid": "20687CB1BAEB4CBEB98AE3F5FE72C597", 00:25:26.752 "uuid": "20687cb1-baeb-4cbe-b98a-e3f5fe72c597" 00:25:26.752 } 00:25:26.752 } 00:25:26.752 }, 00:25:26.752 { 00:25:26.752 "method": "nvmf_subsystem_add_listener", 00:25:26.752 "params": { 00:25:26.752 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:26.752 "listen_address": { 00:25:26.752 "trtype": "TCP", 00:25:26.752 "adrfam": "IPv4", 00:25:26.752 "traddr": "10.0.0.2", 00:25:26.752 "trsvcid": "4420" 00:25:26.752 }, 00:25:26.752 "secure_channel": true 00:25:26.752 } 00:25:26.752 } 00:25:26.752 ] 00:25:26.752 } 00:25:26.752 ] 00:25:26.752 }' 00:25:26.752 20:20:24 -- target/tls.sh@206 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:26.752 20:20:24 -- target/tls.sh@206 -- # bdevperfconf='{ 00:25:26.752 "subsystems": [ 00:25:26.752 { 00:25:26.752 "subsystem": "iobuf", 00:25:26.752 "config": [ 00:25:26.752 { 00:25:26.752 "method": "iobuf_set_options", 00:25:26.752 "params": { 00:25:26.752 "small_pool_count": 8192, 00:25:26.752 "large_pool_count": 1024, 00:25:26.752 "small_bufsize": 8192, 00:25:26.752 "large_bufsize": 135168 00:25:26.752 } 00:25:26.752 } 00:25:26.752 ] 00:25:26.752 }, 00:25:26.752 { 00:25:26.752 "subsystem": "sock", 00:25:26.752 "config": [ 00:25:26.752 { 00:25:26.752 "method": "sock_impl_set_options", 00:25:26.752 "params": { 00:25:26.752 "impl_name": "posix", 00:25:26.752 "recv_buf_size": 2097152, 00:25:26.752 "send_buf_size": 2097152, 00:25:26.752 "enable_recv_pipe": true, 00:25:26.752 "enable_quickack": false, 00:25:26.752 "enable_placement_id": 0, 00:25:26.752 "enable_zerocopy_send_server": true, 00:25:26.752 "enable_zerocopy_send_client": false, 00:25:26.752 "zerocopy_threshold": 0, 00:25:26.752 "tls_version": 0, 00:25:26.752 "enable_ktls": false 00:25:26.752 } 00:25:26.752 }, 00:25:26.752 { 00:25:26.752 "method": "sock_impl_set_options", 
00:25:26.752 "params": { 00:25:26.752 "impl_name": "ssl", 00:25:26.752 "recv_buf_size": 4096, 00:25:26.752 "send_buf_size": 4096, 00:25:26.752 "enable_recv_pipe": true, 00:25:26.752 "enable_quickack": false, 00:25:26.752 "enable_placement_id": 0, 00:25:26.752 "enable_zerocopy_send_server": true, 00:25:26.752 "enable_zerocopy_send_client": false, 00:25:26.752 "zerocopy_threshold": 0, 00:25:26.752 "tls_version": 0, 00:25:26.753 "enable_ktls": false 00:25:26.753 } 00:25:26.753 } 00:25:26.753 ] 00:25:26.753 }, 00:25:26.753 { 00:25:26.753 "subsystem": "vmd", 00:25:26.753 "config": [] 00:25:26.753 }, 00:25:26.753 { 00:25:26.753 "subsystem": "accel", 00:25:26.753 "config": [ 00:25:26.753 { 00:25:26.753 "method": "accel_set_options", 00:25:26.753 "params": { 00:25:26.753 "small_cache_size": 128, 00:25:26.753 "large_cache_size": 16, 00:25:26.753 "task_count": 2048, 00:25:26.753 "sequence_count": 2048, 00:25:26.753 "buf_count": 2048 00:25:26.753 } 00:25:26.753 } 00:25:26.753 ] 00:25:26.753 }, 00:25:26.753 { 00:25:26.753 "subsystem": "bdev", 00:25:26.753 "config": [ 00:25:26.753 { 00:25:26.753 "method": "bdev_set_options", 00:25:26.753 "params": { 00:25:26.753 "bdev_io_pool_size": 65535, 00:25:26.753 "bdev_io_cache_size": 256, 00:25:26.753 "bdev_auto_examine": true, 00:25:26.753 "iobuf_small_cache_size": 128, 00:25:26.753 "iobuf_large_cache_size": 16 00:25:26.753 } 00:25:26.753 }, 00:25:26.753 { 00:25:26.753 "method": "bdev_raid_set_options", 00:25:26.753 "params": { 00:25:26.753 "process_window_size_kb": 1024 00:25:26.753 } 00:25:26.753 }, 00:25:26.753 { 00:25:26.753 "method": "bdev_iscsi_set_options", 00:25:26.753 "params": { 00:25:26.753 "timeout_sec": 30 00:25:26.753 } 00:25:26.753 }, 00:25:26.753 { 00:25:26.753 "method": "bdev_nvme_set_options", 00:25:26.753 "params": { 00:25:26.753 "action_on_timeout": "none", 00:25:26.753 "timeout_us": 0, 00:25:26.753 "timeout_admin_us": 0, 00:25:26.753 "keep_alive_timeout_ms": 10000, 00:25:26.753 "transport_retry_count": 4, 00:25:26.753 "arbitration_burst": 0, 00:25:26.753 "low_priority_weight": 0, 00:25:26.753 "medium_priority_weight": 0, 00:25:26.753 "high_priority_weight": 0, 00:25:26.753 "nvme_adminq_poll_period_us": 10000, 00:25:26.753 "nvme_ioq_poll_period_us": 0, 00:25:26.753 "io_queue_requests": 512, 00:25:26.753 "delay_cmd_submit": true, 00:25:26.753 "bdev_retry_count": 3, 00:25:26.753 "transport_ack_timeout": 0, 00:25:26.753 "ctrlr_loss_timeout_sec": 0, 00:25:26.753 "reconnect_delay_sec": 0, 00:25:26.753 "fast_io_fail_timeout_sec": 0, 00:25:26.753 "generate_uuids": false, 00:25:26.753 "transport_tos": 0, 00:25:26.753 "io_path_stat": false, 00:25:26.753 "allow_accel_sequence": false 00:25:26.753 } 00:25:26.753 }, 00:25:26.753 { 00:25:26.753 "method": "bdev_nvme_attach_controller", 00:25:26.753 "params": { 00:25:26.753 "name": "TLSTEST", 00:25:26.753 "trtype": "TCP", 00:25:26.753 "adrfam": "IPv4", 00:25:26.753 "traddr": "10.0.0.2", 00:25:26.753 "trsvcid": "4420", 00:25:26.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:26.753 "prchk_reftag": false, 00:25:26.753 "prchk_guard": false, 00:25:26.753 "ctrlr_loss_timeout_sec": 0, 00:25:26.753 "reconnect_delay_sec": 0, 00:25:26.753 "fast_io_fail_timeout_sec": 0, 00:25:26.753 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:25:26.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:26.753 "hdgst": false, 00:25:26.753 "ddgst": false 00:25:26.753 } 00:25:26.753 }, 00:25:26.753 { 00:25:26.753 "method": "bdev_nvme_set_hotplug", 00:25:26.753 "params": { 00:25:26.753 
"period_us": 100000, 00:25:26.753 "enable": false 00:25:26.753 } 00:25:26.753 }, 00:25:26.753 { 00:25:26.753 "method": "bdev_wait_for_examine" 00:25:26.753 } 00:25:26.753 ] 00:25:26.753 }, 00:25:26.753 { 00:25:26.753 "subsystem": "nbd", 00:25:26.753 "config": [] 00:25:26.753 } 00:25:26.753 ] 00:25:26.753 }' 00:25:26.753 20:20:24 -- target/tls.sh@208 -- # killprocess 1629534 00:25:26.753 20:20:24 -- common/autotest_common.sh@926 -- # '[' -z 1629534 ']' 00:25:26.753 20:20:24 -- common/autotest_common.sh@930 -- # kill -0 1629534 00:25:26.753 20:20:24 -- common/autotest_common.sh@931 -- # uname 00:25:26.753 20:20:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:26.753 20:20:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1629534 00:25:27.011 20:20:24 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:27.011 20:20:24 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:27.011 20:20:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1629534' 00:25:27.011 killing process with pid 1629534 00:25:27.011 20:20:24 -- common/autotest_common.sh@945 -- # kill 1629534 00:25:27.011 Received shutdown signal, test time was about 10.000000 seconds 00:25:27.011 00:25:27.012 Latency(us) 00:25:27.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.012 =================================================================================================================== 00:25:27.012 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:27.012 20:20:24 -- common/autotest_common.sh@950 -- # wait 1629534 00:25:27.271 20:20:25 -- target/tls.sh@209 -- # killprocess 1629206 00:25:27.271 20:20:25 -- common/autotest_common.sh@926 -- # '[' -z 1629206 ']' 00:25:27.271 20:20:25 -- common/autotest_common.sh@930 -- # kill -0 1629206 00:25:27.271 20:20:25 -- common/autotest_common.sh@931 -- # uname 00:25:27.271 20:20:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:27.271 20:20:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1629206 00:25:27.271 20:20:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:27.271 20:20:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:27.271 20:20:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1629206' 00:25:27.272 killing process with pid 1629206 00:25:27.272 20:20:25 -- common/autotest_common.sh@945 -- # kill 1629206 00:25:27.272 20:20:25 -- common/autotest_common.sh@950 -- # wait 1629206 00:25:27.843 20:20:25 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:25:27.843 20:20:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:27.843 20:20:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:27.843 20:20:25 -- common/autotest_common.sh@10 -- # set +x 00:25:27.843 20:20:25 -- target/tls.sh@212 -- # echo '{ 00:25:27.843 "subsystems": [ 00:25:27.843 { 00:25:27.843 "subsystem": "iobuf", 00:25:27.843 "config": [ 00:25:27.843 { 00:25:27.843 "method": "iobuf_set_options", 00:25:27.843 "params": { 00:25:27.843 "small_pool_count": 8192, 00:25:27.843 "large_pool_count": 1024, 00:25:27.843 "small_bufsize": 8192, 00:25:27.843 "large_bufsize": 135168 00:25:27.843 } 00:25:27.843 } 00:25:27.843 ] 00:25:27.843 }, 00:25:27.843 { 00:25:27.843 "subsystem": "sock", 00:25:27.843 "config": [ 00:25:27.843 { 00:25:27.843 "method": "sock_impl_set_options", 00:25:27.843 "params": { 00:25:27.843 "impl_name": "posix", 00:25:27.843 "recv_buf_size": 2097152, 
00:25:27.843 "send_buf_size": 2097152, 00:25:27.843 "enable_recv_pipe": true, 00:25:27.843 "enable_quickack": false, 00:25:27.843 "enable_placement_id": 0, 00:25:27.843 "enable_zerocopy_send_server": true, 00:25:27.843 "enable_zerocopy_send_client": false, 00:25:27.843 "zerocopy_threshold": 0, 00:25:27.843 "tls_version": 0, 00:25:27.843 "enable_ktls": false 00:25:27.843 } 00:25:27.843 }, 00:25:27.843 { 00:25:27.843 "method": "sock_impl_set_options", 00:25:27.843 "params": { 00:25:27.843 "impl_name": "ssl", 00:25:27.843 "recv_buf_size": 4096, 00:25:27.843 "send_buf_size": 4096, 00:25:27.843 "enable_recv_pipe": true, 00:25:27.843 "enable_quickack": false, 00:25:27.843 "enable_placement_id": 0, 00:25:27.843 "enable_zerocopy_send_server": true, 00:25:27.843 "enable_zerocopy_send_client": false, 00:25:27.843 "zerocopy_threshold": 0, 00:25:27.843 "tls_version": 0, 00:25:27.843 "enable_ktls": false 00:25:27.843 } 00:25:27.843 } 00:25:27.843 ] 00:25:27.843 }, 00:25:27.843 { 00:25:27.843 "subsystem": "vmd", 00:25:27.843 "config": [] 00:25:27.843 }, 00:25:27.843 { 00:25:27.843 "subsystem": "accel", 00:25:27.843 "config": [ 00:25:27.843 { 00:25:27.843 "method": "accel_set_options", 00:25:27.843 "params": { 00:25:27.843 "small_cache_size": 128, 00:25:27.843 "large_cache_size": 16, 00:25:27.844 "task_count": 2048, 00:25:27.844 "sequence_count": 2048, 00:25:27.844 "buf_count": 2048 00:25:27.844 } 00:25:27.844 } 00:25:27.844 ] 00:25:27.844 }, 00:25:27.844 { 00:25:27.844 "subsystem": "bdev", 00:25:27.844 "config": [ 00:25:27.844 { 00:25:27.844 "method": "bdev_set_options", 00:25:27.844 "params": { 00:25:27.844 "bdev_io_pool_size": 65535, 00:25:27.844 "bdev_io_cache_size": 256, 00:25:27.844 "bdev_auto_examine": true, 00:25:27.844 "iobuf_small_cache_size": 128, 00:25:27.844 "iobuf_large_cache_size": 16 00:25:27.844 } 00:25:27.844 }, 00:25:27.844 { 00:25:27.844 "method": "bdev_raid_set_options", 00:25:27.844 "params": { 00:25:27.844 "process_window_size_kb": 1024 00:25:27.844 } 00:25:27.844 }, 00:25:27.844 { 00:25:27.844 "method": "bdev_iscsi_set_options", 00:25:27.844 "params": { 00:25:27.844 "timeout_sec": 30 00:25:27.844 } 00:25:27.844 }, 00:25:27.844 { 00:25:27.844 "method": "bdev_nvme_set_options", 00:25:27.844 "params": { 00:25:27.844 "action_on_timeout": "none", 00:25:27.844 "timeout_us": 0, 00:25:27.844 "timeout_admin_us": 0, 00:25:27.844 "keep_alive_timeout_ms": 10000, 00:25:27.844 "transport_retry_count": 4, 00:25:27.844 "arbitration_burst": 0, 00:25:27.844 "low_priority_weight": 0, 00:25:27.844 "medium_priority_weight": 0, 00:25:27.844 "high_priority_weight": 0, 00:25:27.844 "nvme_adminq_poll_period_us": 10000, 00:25:27.844 "nvme_ioq_poll_period_us": 0, 00:25:27.844 "io_queue_requests": 0, 00:25:27.844 "delay_cmd_submit": true, 00:25:27.844 "bdev_retry_count": 3, 00:25:27.844 "transport_ack_timeout": 0, 00:25:27.844 "ctrlr_loss_timeout_sec": 0, 00:25:27.844 "reconnect_delay_sec": 0, 00:25:27.844 "fast_io_fail_timeout_sec": 0, 00:25:27.844 "generate_uuids": false, 00:25:27.844 "transport_tos": 0, 00:25:27.844 "io_path_stat": false, 00:25:27.844 "allow_accel_sequence": false 00:25:27.844 } 00:25:27.844 }, 00:25:27.844 { 00:25:27.844 "method": "bdev_nvme_set_hotplug", 00:25:27.844 "params": { 00:25:27.844 "period_us": 100000, 00:25:27.844 "enable": false 00:25:27.844 } 00:25:27.844 }, 00:25:27.844 { 00:25:27.844 "method": "bdev_malloc_create", 00:25:27.844 "params": { 00:25:27.844 "name": "malloc0", 00:25:27.844 "num_blocks": 8192, 00:25:27.844 "block_size": 4096, 00:25:27.844 "physical_block_size": 
4096, 00:25:27.844 "uuid": "20687cb1-baeb-4cbe-b98a-e3f5fe72c597", 00:25:27.844 "optimal_io_boundary": 0 00:25:27.844 } 00:25:27.844 }, 00:25:27.844 { 00:25:27.844 "method": "bdev_wait_for_examine" 00:25:27.844 } 00:25:27.844 ] 00:25:27.844 }, 00:25:27.844 { 00:25:27.844 "subsystem": "nbd", 00:25:27.844 "config": [] 00:25:27.844 }, 00:25:27.844 { 00:25:27.844 "subsystem": "scheduler", 00:25:27.844 "config": [ 00:25:27.844 { 00:25:27.844 "method": "framework_set_scheduler", 00:25:27.844 "params": { 00:25:27.844 "name": "static" 00:25:27.844 } 00:25:27.844 } 00:25:27.844 ] 00:25:27.844 }, 00:25:27.844 { 00:25:27.844 "subsystem": "nvmf", 00:25:27.844 "config": [ 00:25:27.844 { 00:25:27.844 "method": "nvmf_set_config", 00:25:27.844 "params": { 00:25:27.844 "discovery_filter": "match_any", 00:25:27.844 "admin_cmd_passthru": { 00:25:27.844 "identify_ctrlr": false 00:25:27.844 } 00:25:27.844 } 00:25:27.844 }, 00:25:27.844 { 00:25:27.844 "method": "nvmf_set_max_subsystems", 00:25:27.844 "params": { 00:25:27.844 "max_subsystems": 1024 00:25:27.844 } 00:25:27.844 }, 00:25:27.844 { 00:25:27.844 "method": "nvmf_set_crdt", 00:25:27.844 "params": { 00:25:27.844 "crdt1": 0, 00:25:27.844 "crdt2": 0, 00:25:27.844 "crdt3": 0 00:25:27.844 } 00:25:27.844 }, 00:25:27.844 { 00:25:27.844 "method": "nvmf_create_transport", 00:25:27.844 "params": { 00:25:27.844 "trtype": "TCP", 00:25:27.844 "max_queue_depth": 128, 00:25:27.844 "max_io_qpairs_per_ctrlr": 127, 00:25:27.844 "in_capsule_data_size": 4096, 00:25:27.844 "max_io_size": 131072, 00:25:27.844 "io_unit_size": 131072, 00:25:27.844 "max_aq_depth": 128, 00:25:27.844 "num_shared_buffers": 511, 00:25:27.844 "buf_cache_size": 4294967295, 00:25:27.844 "dif_insert_or_strip": false, 00:25:27.844 "zcopy": false, 00:25:27.844 "c2h_success": false, 00:25:27.844 "sock_priority": 0, 00:25:27.844 "abort_timeout_sec": 1 00:25:27.844 } 00:25:27.844 }, 00:25:27.844 { 00:25:27.844 "method": "nvmf_create_subsystem", 00:25:27.844 "params": { 00:25:27.844 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:27.844 "allow_any_host": false, 00:25:27.844 "serial_number": "SPDK00000000000001", 00:25:27.844 "model_number": "SPDK bdev Controller", 00:25:27.844 "max_namespaces": 10, 00:25:27.844 "min_cntlid": 1, 00:25:27.844 "max_cntlid": 65519, 00:25:27.844 "ana_reporting": false 00:25:27.844 } 00:25:27.844 }, 00:25:27.844 { 00:25:27.844 "method": "nvmf_subsystem_add_host", 00:25:27.844 "params": { 00:25:27.844 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:27.844 "host": "nqn.2016-06.io.spdk:host1", 00:25:27.844 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:25:27.844 } 00:25:27.844 }, 00:25:27.844 { 00:25:27.844 "method": "nvmf_subsystem_add_ns", 00:25:27.844 "params": { 00:25:27.844 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:27.844 "namespace": { 00:25:27.844 "nsid": 1, 00:25:27.844 "bdev_name": "malloc0", 00:25:27.844 "nguid": "20687CB1BAEB4CBEB98AE3F5FE72C597", 00:25:27.844 "uuid": "20687cb1-baeb-4cbe-b98a-e3f5fe72c597" 00:25:27.844 } 00:25:27.844 } 00:25:27.844 }, 00:25:27.844 { 00:25:27.844 "method": "nvmf_subsystem_add_listener", 00:25:27.844 "params": { 00:25:27.844 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:27.844 "listen_address": { 00:25:27.844 "trtype": "TCP", 00:25:27.844 "adrfam": "IPv4", 00:25:27.844 "traddr": "10.0.0.2", 00:25:27.844 "trsvcid": "4420" 00:25:27.844 }, 00:25:27.844 "secure_channel": true 00:25:27.844 } 00:25:27.844 } 00:25:27.844 ] 00:25:27.844 } 00:25:27.844 ] 00:25:27.844 }' 00:25:27.844 20:20:25 -- nvmf/common.sh@469 -- # 
nvmfpid=1630082 00:25:27.844 20:20:25 -- nvmf/common.sh@470 -- # waitforlisten 1630082 00:25:27.844 20:20:25 -- common/autotest_common.sh@819 -- # '[' -z 1630082 ']' 00:25:27.844 20:20:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.844 20:20:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:27.844 20:20:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.844 20:20:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:27.844 20:20:25 -- common/autotest_common.sh@10 -- # set +x 00:25:27.844 20:20:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:25:27.844 [2024-04-25 20:20:25.683451] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:27.844 [2024-04-25 20:20:25.683581] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.844 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.105 [2024-04-25 20:20:25.810149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.105 [2024-04-25 20:20:25.906705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:28.105 [2024-04-25 20:20:25.906887] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.105 [2024-04-25 20:20:25.906902] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.105 [2024-04-25 20:20:25.906911] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:28.105 [2024-04-25 20:20:25.906942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.363 [2024-04-25 20:20:26.190548] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.363 [2024-04-25 20:20:26.231477] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:28.363 [2024-04-25 20:20:26.231701] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.621 20:20:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:28.621 20:20:26 -- common/autotest_common.sh@852 -- # return 0 00:25:28.621 20:20:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:28.621 20:20:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:28.621 20:20:26 -- common/autotest_common.sh@10 -- # set +x 00:25:28.621 20:20:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.621 20:20:26 -- target/tls.sh@216 -- # bdevperf_pid=1630174 00:25:28.621 20:20:26 -- target/tls.sh@217 -- # waitforlisten 1630174 /var/tmp/bdevperf.sock 00:25:28.621 20:20:26 -- common/autotest_common.sh@819 -- # '[' -z 1630174 ']' 00:25:28.621 20:20:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:28.621 20:20:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:28.621 20:20:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:28.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:28.621 20:20:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:28.621 20:20:26 -- common/autotest_common.sh@10 -- # set +x 00:25:28.621 20:20:26 -- target/tls.sh@213 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:25:28.621 20:20:26 -- target/tls.sh@213 -- # echo '{ 00:25:28.621 "subsystems": [ 00:25:28.621 { 00:25:28.621 "subsystem": "iobuf", 00:25:28.621 "config": [ 00:25:28.621 { 00:25:28.621 "method": "iobuf_set_options", 00:25:28.621 "params": { 00:25:28.621 "small_pool_count": 8192, 00:25:28.621 "large_pool_count": 1024, 00:25:28.621 "small_bufsize": 8192, 00:25:28.621 "large_bufsize": 135168 00:25:28.621 } 00:25:28.621 } 00:25:28.621 ] 00:25:28.621 }, 00:25:28.621 { 00:25:28.621 "subsystem": "sock", 00:25:28.621 "config": [ 00:25:28.621 { 00:25:28.621 "method": "sock_impl_set_options", 00:25:28.621 "params": { 00:25:28.621 "impl_name": "posix", 00:25:28.621 "recv_buf_size": 2097152, 00:25:28.621 "send_buf_size": 2097152, 00:25:28.621 "enable_recv_pipe": true, 00:25:28.621 "enable_quickack": false, 00:25:28.621 "enable_placement_id": 0, 00:25:28.621 "enable_zerocopy_send_server": true, 00:25:28.622 "enable_zerocopy_send_client": false, 00:25:28.622 "zerocopy_threshold": 0, 00:25:28.622 "tls_version": 0, 00:25:28.622 "enable_ktls": false 00:25:28.622 } 00:25:28.622 }, 00:25:28.622 { 00:25:28.622 "method": "sock_impl_set_options", 00:25:28.622 "params": { 00:25:28.622 "impl_name": "ssl", 00:25:28.622 "recv_buf_size": 4096, 00:25:28.622 "send_buf_size": 4096, 00:25:28.622 "enable_recv_pipe": true, 00:25:28.622 "enable_quickack": false, 00:25:28.622 "enable_placement_id": 0, 00:25:28.622 "enable_zerocopy_send_server": true, 00:25:28.622 "enable_zerocopy_send_client": false, 00:25:28.622 "zerocopy_threshold": 0, 00:25:28.622 "tls_version": 0, 00:25:28.622 "enable_ktls": false 00:25:28.622 } 00:25:28.622 } 00:25:28.622 ] 00:25:28.622 }, 00:25:28.622 { 00:25:28.622 "subsystem": "vmd", 00:25:28.622 "config": [] 00:25:28.622 }, 00:25:28.622 { 00:25:28.622 "subsystem": "accel", 00:25:28.622 "config": [ 00:25:28.622 { 00:25:28.622 "method": "accel_set_options", 00:25:28.622 "params": { 00:25:28.622 "small_cache_size": 128, 00:25:28.622 "large_cache_size": 16, 00:25:28.622 "task_count": 2048, 00:25:28.622 "sequence_count": 2048, 00:25:28.622 "buf_count": 2048 00:25:28.622 } 00:25:28.622 } 00:25:28.622 ] 00:25:28.622 }, 00:25:28.622 { 00:25:28.622 "subsystem": "bdev", 00:25:28.622 "config": [ 00:25:28.622 { 00:25:28.622 "method": "bdev_set_options", 00:25:28.622 "params": { 00:25:28.622 "bdev_io_pool_size": 65535, 00:25:28.622 "bdev_io_cache_size": 256, 00:25:28.622 "bdev_auto_examine": true, 00:25:28.622 "iobuf_small_cache_size": 128, 00:25:28.622 "iobuf_large_cache_size": 16 00:25:28.622 } 00:25:28.622 }, 00:25:28.622 { 00:25:28.622 "method": "bdev_raid_set_options", 00:25:28.622 "params": { 00:25:28.622 "process_window_size_kb": 1024 00:25:28.622 } 00:25:28.622 }, 00:25:28.622 { 00:25:28.622 "method": "bdev_iscsi_set_options", 00:25:28.622 "params": { 00:25:28.622 "timeout_sec": 30 00:25:28.622 } 00:25:28.622 }, 00:25:28.622 { 00:25:28.622 "method": "bdev_nvme_set_options", 00:25:28.622 "params": { 00:25:28.622 "action_on_timeout": "none", 00:25:28.622 "timeout_us": 0, 00:25:28.622 "timeout_admin_us": 0, 00:25:28.622 "keep_alive_timeout_ms": 10000, 00:25:28.622 
"transport_retry_count": 4, 00:25:28.622 "arbitration_burst": 0, 00:25:28.622 "low_priority_weight": 0, 00:25:28.622 "medium_priority_weight": 0, 00:25:28.622 "high_priority_weight": 0, 00:25:28.622 "nvme_adminq_poll_period_us": 10000, 00:25:28.622 "nvme_ioq_poll_period_us": 0, 00:25:28.622 "io_queue_requests": 512, 00:25:28.622 "delay_cmd_submit": true, 00:25:28.622 "bdev_retry_count": 3, 00:25:28.622 "transport_ack_timeout": 0, 00:25:28.622 "ctrlr_loss_timeout_sec": 0, 00:25:28.622 "reconnect_delay_sec": 0, 00:25:28.622 "fast_io_fail_timeout_sec": 0, 00:25:28.622 "generate_uuids": false, 00:25:28.622 "transport_tos": 0, 00:25:28.622 "io_path_stat": false, 00:25:28.622 "allow_accel_sequence": false 00:25:28.622 } 00:25:28.622 }, 00:25:28.622 { 00:25:28.622 "method": "bdev_nvme_attach_controller", 00:25:28.622 "params": { 00:25:28.622 "name": "TLSTEST", 00:25:28.622 "trtype": "TCP", 00:25:28.622 "adrfam": "IPv4", 00:25:28.622 "traddr": "10.0.0.2", 00:25:28.622 "trsvcid": "4420", 00:25:28.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:28.622 "prchk_reftag": false, 00:25:28.622 "prchk_guard": false, 00:25:28.622 "ctrlr_loss_timeout_sec": 0, 00:25:28.622 "reconnect_delay_sec": 0, 00:25:28.622 "fast_io_fail_timeout_sec": 0, 00:25:28.622 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:25:28.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:28.622 "hdgst": false, 00:25:28.622 "ddgst": false 00:25:28.622 } 00:25:28.622 }, 00:25:28.622 { 00:25:28.622 "method": "bdev_nvme_set_hotplug", 00:25:28.622 "params": { 00:25:28.622 "period_us": 100000, 00:25:28.622 "enable": false 00:25:28.622 } 00:25:28.622 }, 00:25:28.622 { 00:25:28.622 "method": "bdev_wait_for_examine" 00:25:28.622 } 00:25:28.622 ] 00:25:28.622 }, 00:25:28.622 { 00:25:28.622 "subsystem": "nbd", 00:25:28.622 "config": [] 00:25:28.622 } 00:25:28.622 ] 00:25:28.622 }' 00:25:28.622 [2024-04-25 20:20:26.477258] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:28.622 [2024-04-25 20:20:26.477364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1630174 ] 00:25:28.622 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.880 [2024-04-25 20:20:26.587444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.880 [2024-04-25 20:20:26.681144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:29.137 [2024-04-25 20:20:26.889632] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:29.397 20:20:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:29.397 20:20:27 -- common/autotest_common.sh@852 -- # return 0 00:25:29.397 20:20:27 -- target/tls.sh@220 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:29.397 Running I/O for 10 seconds... 
00:25:39.384 00:25:39.384 Latency(us) 00:25:39.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.384 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:39.384 Verification LBA range: start 0x0 length 0x2000 00:25:39.384 TLSTESTn1 : 10.01 6323.75 24.70 0.00 0.00 20221.37 3656.22 42494.92 00:25:39.384 =================================================================================================================== 00:25:39.384 Total : 6323.75 24.70 0.00 0.00 20221.37 3656.22 42494.92 00:25:39.384 0 00:25:39.384 20:20:37 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:39.384 20:20:37 -- target/tls.sh@223 -- # killprocess 1630174 00:25:39.384 20:20:37 -- common/autotest_common.sh@926 -- # '[' -z 1630174 ']' 00:25:39.384 20:20:37 -- common/autotest_common.sh@930 -- # kill -0 1630174 00:25:39.384 20:20:37 -- common/autotest_common.sh@931 -- # uname 00:25:39.384 20:20:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:39.384 20:20:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1630174 00:25:39.384 20:20:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:39.384 20:20:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:39.384 20:20:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1630174' 00:25:39.384 killing process with pid 1630174 00:25:39.384 20:20:37 -- common/autotest_common.sh@945 -- # kill 1630174 00:25:39.384 Received shutdown signal, test time was about 10.000000 seconds 00:25:39.384 00:25:39.384 Latency(us) 00:25:39.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.384 =================================================================================================================== 00:25:39.384 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:39.384 20:20:37 -- common/autotest_common.sh@950 -- # wait 1630174 00:25:39.954 20:20:37 -- target/tls.sh@224 -- # killprocess 1630082 00:25:39.954 20:20:37 -- common/autotest_common.sh@926 -- # '[' -z 1630082 ']' 00:25:39.954 20:20:37 -- common/autotest_common.sh@930 -- # kill -0 1630082 00:25:39.954 20:20:37 -- common/autotest_common.sh@931 -- # uname 00:25:39.954 20:20:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:39.954 20:20:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1630082 00:25:39.954 20:20:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:39.954 20:20:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:39.954 20:20:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1630082' 00:25:39.954 killing process with pid 1630082 00:25:39.954 20:20:37 -- common/autotest_common.sh@945 -- # kill 1630082 00:25:39.954 20:20:37 -- common/autotest_common.sh@950 -- # wait 1630082 00:25:40.524 20:20:38 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:25:40.524 20:20:38 -- target/tls.sh@227 -- # cleanup 00:25:40.524 20:20:38 -- target/tls.sh@15 -- # process_shm --id 0 00:25:40.524 20:20:38 -- common/autotest_common.sh@796 -- # type=--id 00:25:40.524 20:20:38 -- common/autotest_common.sh@797 -- # id=0 00:25:40.524 20:20:38 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:25:40.524 20:20:38 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:40.524 20:20:38 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:25:40.524 20:20:38 -- common/autotest_common.sh@804 -- # 
[[ -z nvmf_trace.0 ]] 00:25:40.524 20:20:38 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:25:40.524 20:20:38 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:40.524 nvmf_trace.0 00:25:40.524 20:20:38 -- common/autotest_common.sh@811 -- # return 0 00:25:40.524 20:20:38 -- target/tls.sh@16 -- # killprocess 1630174 00:25:40.524 20:20:38 -- common/autotest_common.sh@926 -- # '[' -z 1630174 ']' 00:25:40.524 20:20:38 -- common/autotest_common.sh@930 -- # kill -0 1630174 00:25:40.524 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1630174) - No such process 00:25:40.524 20:20:38 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1630174 is not found' 00:25:40.524 Process with pid 1630174 is not found 00:25:40.524 20:20:38 -- target/tls.sh@17 -- # nvmftestfini 00:25:40.524 20:20:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:40.524 20:20:38 -- nvmf/common.sh@116 -- # sync 00:25:40.524 20:20:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:40.524 20:20:38 -- nvmf/common.sh@119 -- # set +e 00:25:40.524 20:20:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:40.524 20:20:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:40.524 rmmod nvme_tcp 00:25:40.524 rmmod nvme_fabrics 00:25:40.524 rmmod nvme_keyring 00:25:40.524 20:20:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:40.524 20:20:38 -- nvmf/common.sh@123 -- # set -e 00:25:40.524 20:20:38 -- nvmf/common.sh@124 -- # return 0 00:25:40.524 20:20:38 -- nvmf/common.sh@477 -- # '[' -n 1630082 ']' 00:25:40.524 20:20:38 -- nvmf/common.sh@478 -- # killprocess 1630082 00:25:40.524 20:20:38 -- common/autotest_common.sh@926 -- # '[' -z 1630082 ']' 00:25:40.525 20:20:38 -- common/autotest_common.sh@930 -- # kill -0 1630082 00:25:40.525 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1630082) - No such process 00:25:40.525 20:20:38 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1630082 is not found' 00:25:40.525 Process with pid 1630082 is not found 00:25:40.525 20:20:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:40.525 20:20:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:40.525 20:20:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:40.525 20:20:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:40.525 20:20:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:40.525 20:20:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.525 20:20:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:40.525 20:20:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.060 20:20:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:43.060 20:20:40 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:43.060 00:25:43.060 real 1m12.623s 00:25:43.060 user 1m47.964s 00:25:43.060 sys 0m20.673s 00:25:43.060 20:20:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:43.060 20:20:40 -- common/autotest_common.sh@10 -- # set +x 00:25:43.060 ************************************ 00:25:43.060 END TEST nvmf_tls 00:25:43.060 ************************************ 00:25:43.060 20:20:40 -- 
nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:43.060 20:20:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:43.060 20:20:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:43.060 20:20:40 -- common/autotest_common.sh@10 -- # set +x 00:25:43.060 ************************************ 00:25:43.060 START TEST nvmf_fips 00:25:43.060 ************************************ 00:25:43.060 20:20:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:43.060 * Looking for test storage... 00:25:43.060 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips 00:25:43.060 20:20:40 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:25:43.060 20:20:40 -- nvmf/common.sh@7 -- # uname -s 00:25:43.060 20:20:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.060 20:20:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.060 20:20:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.060 20:20:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.060 20:20:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.060 20:20:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.060 20:20:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.060 20:20:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.060 20:20:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.060 20:20:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.060 20:20:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:25:43.060 20:20:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:25:43.060 20:20:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.060 20:20:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.060 20:20:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:43.060 20:20:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:25:43.060 20:20:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.060 20:20:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.060 20:20:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.060 20:20:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.060 20:20:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.060 20:20:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.060 20:20:40 -- paths/export.sh@5 -- # export PATH 00:25:43.060 20:20:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.060 20:20:40 -- nvmf/common.sh@46 -- # : 0 00:25:43.060 20:20:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:43.060 20:20:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:43.060 20:20:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:43.060 20:20:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.060 20:20:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.060 20:20:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:43.060 20:20:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:43.060 20:20:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:43.060 20:20:40 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:25:43.060 20:20:40 -- fips/fips.sh@89 -- # check_openssl_version 00:25:43.060 20:20:40 -- fips/fips.sh@83 -- # local target=3.0.0 00:25:43.060 20:20:40 -- fips/fips.sh@85 -- # openssl version 00:25:43.060 20:20:40 -- fips/fips.sh@85 -- # awk '{print $2}' 00:25:43.060 20:20:40 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:25:43.060 20:20:40 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:25:43.060 20:20:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:43.060 20:20:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:43.060 20:20:40 -- scripts/common.sh@335 -- # IFS=.-: 00:25:43.060 20:20:40 -- scripts/common.sh@335 -- # read -ra ver1 00:25:43.060 20:20:40 -- scripts/common.sh@336 -- # IFS=.-: 00:25:43.060 20:20:40 -- scripts/common.sh@336 -- # read -ra ver2 00:25:43.060 20:20:40 -- scripts/common.sh@337 -- # local 'op=>=' 00:25:43.060 20:20:40 -- scripts/common.sh@339 -- # ver1_l=3 00:25:43.060 20:20:40 -- scripts/common.sh@340 -- # ver2_l=3 00:25:43.060 20:20:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 
00:25:43.060 20:20:40 -- scripts/common.sh@343 -- # case "$op" in 00:25:43.060 20:20:40 -- scripts/common.sh@347 -- # : 1 00:25:43.060 20:20:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:43.060 20:20:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:43.060 20:20:40 -- scripts/common.sh@364 -- # decimal 3 00:25:43.060 20:20:40 -- scripts/common.sh@352 -- # local d=3 00:25:43.060 20:20:40 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:43.060 20:20:40 -- scripts/common.sh@354 -- # echo 3 00:25:43.060 20:20:40 -- scripts/common.sh@364 -- # ver1[v]=3 00:25:43.060 20:20:40 -- scripts/common.sh@365 -- # decimal 3 00:25:43.060 20:20:40 -- scripts/common.sh@352 -- # local d=3 00:25:43.060 20:20:40 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:43.060 20:20:40 -- scripts/common.sh@354 -- # echo 3 00:25:43.060 20:20:40 -- scripts/common.sh@365 -- # ver2[v]=3 00:25:43.060 20:20:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:43.060 20:20:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:43.060 20:20:40 -- scripts/common.sh@363 -- # (( v++ )) 00:25:43.060 20:20:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:43.060 20:20:40 -- scripts/common.sh@364 -- # decimal 0 00:25:43.060 20:20:40 -- scripts/common.sh@352 -- # local d=0 00:25:43.060 20:20:40 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:43.060 20:20:40 -- scripts/common.sh@354 -- # echo 0 00:25:43.060 20:20:40 -- scripts/common.sh@364 -- # ver1[v]=0 00:25:43.060 20:20:40 -- scripts/common.sh@365 -- # decimal 0 00:25:43.060 20:20:40 -- scripts/common.sh@352 -- # local d=0 00:25:43.060 20:20:40 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:43.060 20:20:40 -- scripts/common.sh@354 -- # echo 0 00:25:43.060 20:20:40 -- scripts/common.sh@365 -- # ver2[v]=0 00:25:43.060 20:20:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:43.060 20:20:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:43.060 20:20:40 -- scripts/common.sh@363 -- # (( v++ )) 00:25:43.060 20:20:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:43.060 20:20:40 -- scripts/common.sh@364 -- # decimal 9 00:25:43.060 20:20:40 -- scripts/common.sh@352 -- # local d=9 00:25:43.060 20:20:40 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:25:43.060 20:20:40 -- scripts/common.sh@354 -- # echo 9 00:25:43.061 20:20:40 -- scripts/common.sh@364 -- # ver1[v]=9 00:25:43.061 20:20:40 -- scripts/common.sh@365 -- # decimal 0 00:25:43.061 20:20:40 -- scripts/common.sh@352 -- # local d=0 00:25:43.061 20:20:40 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:43.061 20:20:40 -- scripts/common.sh@354 -- # echo 0 00:25:43.061 20:20:40 -- scripts/common.sh@365 -- # ver2[v]=0 00:25:43.061 20:20:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:43.061 20:20:40 -- scripts/common.sh@366 -- # return 0 00:25:43.061 20:20:40 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:25:43.061 20:20:40 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:25:43.061 20:20:40 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:25:43.061 20:20:40 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:43.061 20:20:40 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:43.061 20:20:40 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:25:43.061 20:20:40 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:25:43.061 20:20:40 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:25:43.061 20:20:40 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:25:43.061 20:20:40 -- fips/fips.sh@114 -- # build_openssl_config 00:25:43.061 20:20:40 -- fips/fips.sh@37 -- # cat 00:25:43.061 20:20:40 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:25:43.061 20:20:40 -- fips/fips.sh@58 -- # cat - 00:25:43.061 20:20:40 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:43.061 20:20:40 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:43.061 20:20:40 -- fips/fips.sh@117 -- # mapfile -t providers 00:25:43.061 20:20:40 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:25:43.061 20:20:40 -- fips/fips.sh@117 -- # openssl list -providers 00:25:43.061 20:20:40 -- fips/fips.sh@117 -- # grep name 00:25:43.061 20:20:40 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:43.061 20:20:40 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:43.061 20:20:40 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:43.061 20:20:40 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:43.061 20:20:40 -- common/autotest_common.sh@640 -- # local es=0 00:25:43.061 20:20:40 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:43.061 20:20:40 -- common/autotest_common.sh@628 -- # local arg=openssl 00:25:43.061 20:20:40 -- fips/fips.sh@128 -- # : 00:25:43.061 20:20:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:43.061 20:20:40 -- common/autotest_common.sh@632 -- # type -t openssl 00:25:43.061 20:20:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:43.061 20:20:40 -- common/autotest_common.sh@634 -- # type -P openssl 00:25:43.061 20:20:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:43.061 20:20:40 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:25:43.061 20:20:40 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:25:43.061 20:20:40 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:25:43.061 Error setting digest 00:25:43.061 00F29252567F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:25:43.061 00F29252567F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:25:43.061 20:20:40 -- common/autotest_common.sh@643 -- # es=1 00:25:43.061 20:20:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:43.061 20:20:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:43.061 20:20:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
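The trace above is scripts/common.sh doing a field-by-field comparison of the dotted version strings (ge 3.0.9 3.0.0 splits both on '.', compares each field numerically, and decides as soon as one field differs), then locating the FIPS module and listing the base and fips providers; the failing openssl md5 that follows is the expected positive result, since MD5 must be rejected once the FIPS provider is active. A minimal standalone sketch of the same comparison idea, using a hypothetical helper name rather than the script's own cmp_versions:

    version_ge() {
        # Return 0 (true) when $1 >= $2, comparing dotted numeric fields left to right.
        # Only handles purely numeric fields such as 3.0.9; suffixes like 1.1.1k are out of scope.
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x > y )) && return 0
            (( x < y )) && return 1
        done
        return 0    # every field equal
    }

    version_ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo "OpenSSL 3.x or newer"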
00:25:43.061 20:20:40 -- fips/fips.sh@131 -- # nvmftestinit 00:25:43.061 20:20:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:43.061 20:20:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:43.061 20:20:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:43.061 20:20:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:43.061 20:20:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:43.061 20:20:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.061 20:20:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:43.061 20:20:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.061 20:20:40 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:25:43.061 20:20:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:43.061 20:20:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:43.061 20:20:40 -- common/autotest_common.sh@10 -- # set +x 00:25:48.337 20:20:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:48.337 20:20:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:48.337 20:20:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:48.337 20:20:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:48.337 20:20:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:48.337 20:20:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:48.337 20:20:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:48.337 20:20:45 -- nvmf/common.sh@294 -- # net_devs=() 00:25:48.337 20:20:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:48.337 20:20:45 -- nvmf/common.sh@295 -- # e810=() 00:25:48.337 20:20:45 -- nvmf/common.sh@295 -- # local -ga e810 00:25:48.337 20:20:45 -- nvmf/common.sh@296 -- # x722=() 00:25:48.337 20:20:45 -- nvmf/common.sh@296 -- # local -ga x722 00:25:48.337 20:20:45 -- nvmf/common.sh@297 -- # mlx=() 00:25:48.337 20:20:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:48.337 20:20:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.337 20:20:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.337 20:20:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.337 20:20:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.337 20:20:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.337 20:20:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.337 20:20:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.337 20:20:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.337 20:20:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.337 20:20:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.337 20:20:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.337 20:20:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:48.337 20:20:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:48.337 20:20:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:48.337 20:20:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:25:48.337 Found 0000:27:00.0 
(0x8086 - 0x159b) 00:25:48.337 20:20:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:48.337 20:20:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:25:48.337 Found 0000:27:00.1 (0x8086 - 0x159b) 00:25:48.337 20:20:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:48.337 20:20:45 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:48.337 20:20:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.337 20:20:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:48.337 20:20:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.337 20:20:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:25:48.337 Found net devices under 0000:27:00.0: cvl_0_0 00:25:48.337 20:20:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.337 20:20:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:48.337 20:20:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.337 20:20:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:48.337 20:20:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.337 20:20:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:25:48.337 Found net devices under 0000:27:00.1: cvl_0_1 00:25:48.337 20:20:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.337 20:20:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:48.337 20:20:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:48.337 20:20:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:48.337 20:20:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.337 20:20:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.337 20:20:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.337 20:20:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:48.337 20:20:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.337 20:20:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.337 20:20:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:48.337 20:20:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.337 20:20:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.337 20:20:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:48.337 20:20:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:48.337 20:20:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.337 20:20:45 -- nvmf/common.sh@250 -- # ip 
link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.337 20:20:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.337 20:20:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.337 20:20:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:48.337 20:20:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.337 20:20:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.337 20:20:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.337 20:20:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:48.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:25:48.337 00:25:48.337 --- 10.0.0.2 ping statistics --- 00:25:48.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.337 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:25:48.337 20:20:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:48.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:25:48.337 00:25:48.337 --- 10.0.0.1 ping statistics --- 00:25:48.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.337 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:25:48.337 20:20:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.337 20:20:45 -- nvmf/common.sh@410 -- # return 0 00:25:48.337 20:20:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:48.337 20:20:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.337 20:20:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:48.337 20:20:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.337 20:20:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:48.337 20:20:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:48.337 20:20:46 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:48.337 20:20:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:48.337 20:20:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:48.337 20:20:46 -- common/autotest_common.sh@10 -- # set +x 00:25:48.337 20:20:46 -- nvmf/common.sh@469 -- # nvmfpid=1636470 00:25:48.337 20:20:46 -- nvmf/common.sh@470 -- # waitforlisten 1636470 00:25:48.337 20:20:46 -- common/autotest_common.sh@819 -- # '[' -z 1636470 ']' 00:25:48.337 20:20:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.337 20:20:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:48.337 20:20:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.337 20:20:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:48.337 20:20:46 -- common/autotest_common.sh@10 -- # set +x 00:25:48.337 20:20:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:48.337 [2024-04-25 20:20:46.135063] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
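nvmf_tcp_init above builds the test topology from the two ports found at 0000:27:00.0/00.1: cvl_0_0 becomes the target side inside the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, so traffic between the two addresses crosses the two ports instead of staying on loopback, and both directions are verified with a single ping before the target starts. Condensed from the commands in the trace (interface names and addresses exactly as this run uses them):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # keep the host firewall out of the NVMe/TCP path
    ping -c 1 10.0.0.2                                   # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace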
00:25:48.337 [2024-04-25 20:20:46.135178] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.337 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.337 [2024-04-25 20:20:46.257291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.596 [2024-04-25 20:20:46.348680] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:48.596 [2024-04-25 20:20:46.348872] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.596 [2024-04-25 20:20:46.348887] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.596 [2024-04-25 20:20:46.348899] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:48.596 [2024-04-25 20:20:46.348934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.162 20:20:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:49.162 20:20:46 -- common/autotest_common.sh@852 -- # return 0 00:25:49.162 20:20:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:49.162 20:20:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:49.162 20:20:46 -- common/autotest_common.sh@10 -- # set +x 00:25:49.162 20:20:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.162 20:20:46 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:49.162 20:20:46 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:49.162 20:20:46 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:49.162 20:20:46 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:49.162 20:20:46 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:49.162 20:20:46 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:49.162 20:20:46 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:49.162 20:20:46 -- fips/fips.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:25:49.162 [2024-04-25 20:20:46.950468] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.162 [2024-04-25 20:20:46.966433] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:49.162 [2024-04-25 20:20:46.966640] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.162 malloc0 00:25:49.163 20:20:47 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:49.163 20:20:47 -- fips/fips.sh@148 -- # bdevperf_pid=1636769 00:25:49.163 20:20:47 -- fips/fips.sh@146 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:49.163 20:20:47 -- fips/fips.sh@149 -- # waitforlisten 1636769 /var/tmp/bdevperf.sock 00:25:49.163 20:20:47 -- common/autotest_common.sh@819 -- # '[' -z 1636769 ']' 00:25:49.163 20:20:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:49.163 20:20:47 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:25:49.163 20:20:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:49.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:49.163 20:20:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:49.163 20:20:47 -- common/autotest_common.sh@10 -- # set +x 00:25:49.422 [2024-04-25 20:20:47.144424] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:49.422 [2024-04-25 20:20:47.144547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1636769 ] 00:25:49.422 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.422 [2024-04-25 20:20:47.254850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.422 [2024-04-25 20:20:47.349783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.046 20:20:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:50.047 20:20:47 -- common/autotest_common.sh@852 -- # return 0 00:25:50.047 20:20:47 -- fips/fips.sh@151 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:50.047 [2024-04-25 20:20:47.963689] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:50.305 TLSTESTn1 00:25:50.305 20:20:48 -- fips/fips.sh@155 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:50.305 Running I/O for 10 seconds... 
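The FIPS variant then drives the same TLS data path as the nvmf_tls suite above: the PSK is written to key.txt and restricted to mode 0600, the target listens on 10.0.0.2:4420 with TLS (still reported as experimental), and bdevperf attaches as the initiator with the same key before the 10-second verify workload runs. The host side of that handshake, reduced to the three commands visible in the trace ($rootdir here stands for the job's spdk checkout under /var/jenkins/workspace/dsa-phy-autotest):

    # -z keeps bdevperf idle until it is configured over its RPC socket
    $rootdir/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &

    # attach a TLS-protected controller using the PSK written earlier
    $rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk $rootdir/test/nvmf/fips/key.txt

    # kick off the queued verify workload
    $rootdir/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests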
00:26:00.292 00:26:00.292 Latency(us) 00:26:00.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:00.292 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:00.292 Verification LBA range: start 0x0 length 0x2000 00:26:00.292 TLSTESTn1 : 10.02 6495.60 25.37 0.00 0.00 19682.02 3156.08 42770.86 00:26:00.292 =================================================================================================================== 00:26:00.292 Total : 6495.60 25.37 0.00 0.00 19682.02 3156.08 42770.86 00:26:00.292 0 00:26:00.292 20:20:58 -- fips/fips.sh@1 -- # cleanup 00:26:00.292 20:20:58 -- fips/fips.sh@15 -- # process_shm --id 0 00:26:00.292 20:20:58 -- common/autotest_common.sh@796 -- # type=--id 00:26:00.292 20:20:58 -- common/autotest_common.sh@797 -- # id=0 00:26:00.292 20:20:58 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:26:00.292 20:20:58 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:00.292 20:20:58 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:26:00.292 20:20:58 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:26:00.292 20:20:58 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:26:00.292 20:20:58 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:00.292 nvmf_trace.0 00:26:00.553 20:20:58 -- common/autotest_common.sh@811 -- # return 0 00:26:00.553 20:20:58 -- fips/fips.sh@16 -- # killprocess 1636769 00:26:00.553 20:20:58 -- common/autotest_common.sh@926 -- # '[' -z 1636769 ']' 00:26:00.553 20:20:58 -- common/autotest_common.sh@930 -- # kill -0 1636769 00:26:00.553 20:20:58 -- common/autotest_common.sh@931 -- # uname 00:26:00.553 20:20:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:00.553 20:20:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1636769 00:26:00.553 20:20:58 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:26:00.553 20:20:58 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:26:00.553 20:20:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1636769' 00:26:00.553 killing process with pid 1636769 00:26:00.553 20:20:58 -- common/autotest_common.sh@945 -- # kill 1636769 00:26:00.553 Received shutdown signal, test time was about 10.000000 seconds 00:26:00.553 00:26:00.553 Latency(us) 00:26:00.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:00.553 =================================================================================================================== 00:26:00.553 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:00.553 20:20:58 -- common/autotest_common.sh@950 -- # wait 1636769 00:26:00.813 20:20:58 -- fips/fips.sh@17 -- # nvmftestfini 00:26:00.814 20:20:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:00.814 20:20:58 -- nvmf/common.sh@116 -- # sync 00:26:00.814 20:20:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:00.814 20:20:58 -- nvmf/common.sh@119 -- # set +e 00:26:00.814 20:20:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:00.814 20:20:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:00.814 rmmod nvme_tcp 00:26:00.814 rmmod nvme_fabrics 00:26:00.814 rmmod nvme_keyring 00:26:00.814 20:20:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:00.814 20:20:58 -- nvmf/common.sh@123 -- # set -e 00:26:00.814 20:20:58 -- nvmf/common.sh@124 -- # return 0 
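The MiB/s column in the table above follows directly from the IOPS column and the 4096-byte I/O size: 6495.60 x 4096 / 2^20 ≈ 25.37 MiB/s for this run, and the earlier nvmf_tls pass checks out the same way (6323.75 x 4096 / 2^20 ≈ 24.70 MiB/s).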
00:26:00.814 20:20:58 -- nvmf/common.sh@477 -- # '[' -n 1636470 ']' 00:26:00.814 20:20:58 -- nvmf/common.sh@478 -- # killprocess 1636470 00:26:00.814 20:20:58 -- common/autotest_common.sh@926 -- # '[' -z 1636470 ']' 00:26:00.814 20:20:58 -- common/autotest_common.sh@930 -- # kill -0 1636470 00:26:00.814 20:20:58 -- common/autotest_common.sh@931 -- # uname 00:26:00.814 20:20:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:01.073 20:20:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1636470 00:26:01.073 20:20:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:01.073 20:20:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:01.073 20:20:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1636470' 00:26:01.073 killing process with pid 1636470 00:26:01.073 20:20:58 -- common/autotest_common.sh@945 -- # kill 1636470 00:26:01.073 20:20:58 -- common/autotest_common.sh@950 -- # wait 1636470 00:26:01.639 20:20:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:01.639 20:20:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:01.639 20:20:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:01.639 20:20:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:01.639 20:20:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:01.639 20:20:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.639 20:20:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:01.639 20:20:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.546 20:21:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:03.546 20:21:01 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:03.546 00:26:03.546 real 0m20.947s 00:26:03.546 user 0m24.367s 00:26:03.546 sys 0m7.136s 00:26:03.546 20:21:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:03.546 20:21:01 -- common/autotest_common.sh@10 -- # set +x 00:26:03.546 ************************************ 00:26:03.546 END TEST nvmf_fips 00:26:03.546 ************************************ 00:26:03.546 20:21:01 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:26:03.546 20:21:01 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:03.546 20:21:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:03.546 20:21:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:03.546 20:21:01 -- common/autotest_common.sh@10 -- # set +x 00:26:03.546 ************************************ 00:26:03.546 START TEST nvmf_fuzz 00:26:03.546 ************************************ 00:26:03.546 20:21:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:03.804 * Looking for test storage... 
00:26:03.804 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:26:03.804 20:21:01 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:26:03.804 20:21:01 -- nvmf/common.sh@7 -- # uname -s 00:26:03.804 20:21:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:03.804 20:21:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:03.804 20:21:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:03.804 20:21:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:03.804 20:21:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:03.804 20:21:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:03.804 20:21:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:03.804 20:21:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:03.804 20:21:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:03.804 20:21:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:03.804 20:21:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:26:03.804 20:21:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:26:03.804 20:21:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:03.804 20:21:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:03.804 20:21:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:03.804 20:21:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:26:03.804 20:21:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:03.804 20:21:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:03.804 20:21:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:03.804 20:21:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.804 20:21:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.804 20:21:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.804 20:21:01 -- paths/export.sh@5 -- # export PATH 00:26:03.804 20:21:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.804 20:21:01 -- nvmf/common.sh@46 -- # : 0 00:26:03.804 20:21:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:03.804 20:21:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:03.804 20:21:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:03.804 20:21:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:03.804 20:21:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:03.804 20:21:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:03.804 20:21:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:03.804 20:21:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:03.804 20:21:01 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:26:03.804 20:21:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:03.804 20:21:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:03.804 20:21:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:03.804 20:21:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:03.804 20:21:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:03.804 20:21:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.805 20:21:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:03.805 20:21:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.805 20:21:01 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:26:03.805 20:21:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:03.805 20:21:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:03.805 20:21:01 -- common/autotest_common.sh@10 -- # set +x 00:26:09.080 20:21:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:09.080 20:21:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:09.080 20:21:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:09.080 20:21:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:09.080 20:21:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:09.080 20:21:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:09.080 20:21:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:09.080 20:21:06 -- nvmf/common.sh@294 -- # net_devs=() 00:26:09.080 20:21:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:09.080 20:21:06 -- nvmf/common.sh@295 -- # e810=() 00:26:09.080 20:21:06 -- nvmf/common.sh@295 -- # local -ga e810 00:26:09.080 20:21:06 -- nvmf/common.sh@296 -- # 
x722=() 00:26:09.080 20:21:06 -- nvmf/common.sh@296 -- # local -ga x722 00:26:09.080 20:21:06 -- nvmf/common.sh@297 -- # mlx=() 00:26:09.080 20:21:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:09.080 20:21:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:09.080 20:21:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:09.080 20:21:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:09.080 20:21:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:09.080 20:21:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:09.080 20:21:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:09.080 20:21:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:09.080 20:21:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:09.080 20:21:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:09.080 20:21:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:09.080 20:21:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:09.080 20:21:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:09.080 20:21:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:09.080 20:21:06 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:26:09.080 20:21:06 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:26:09.080 20:21:06 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:26:09.080 20:21:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:09.080 20:21:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:09.080 20:21:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:26:09.080 Found 0000:27:00.0 (0x8086 - 0x159b) 00:26:09.080 20:21:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:09.080 20:21:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:09.080 20:21:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.080 20:21:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.080 20:21:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:09.080 20:21:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:09.080 20:21:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:26:09.080 Found 0000:27:00.1 (0x8086 - 0x159b) 00:26:09.080 20:21:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:09.080 20:21:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:09.080 20:21:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.080 20:21:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.080 20:21:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:09.080 20:21:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:09.080 20:21:06 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:26:09.080 20:21:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:09.080 20:21:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.080 20:21:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:09.080 20:21:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.080 20:21:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:26:09.080 Found net devices under 0000:27:00.0: cvl_0_0 00:26:09.080 20:21:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.080 20:21:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:26:09.080 20:21:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.080 20:21:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:09.080 20:21:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.080 20:21:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:26:09.080 Found net devices under 0000:27:00.1: cvl_0_1 00:26:09.080 20:21:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.080 20:21:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:09.080 20:21:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:09.080 20:21:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:09.080 20:21:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:09.080 20:21:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:09.080 20:21:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:09.080 20:21:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:09.080 20:21:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:09.080 20:21:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:09.080 20:21:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:09.080 20:21:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:09.080 20:21:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:09.080 20:21:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:09.080 20:21:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:09.080 20:21:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:09.080 20:21:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:09.080 20:21:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:09.080 20:21:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:09.080 20:21:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:09.080 20:21:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:09.080 20:21:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:09.080 20:21:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:09.080 20:21:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:09.080 20:21:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:09.080 20:21:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:09.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:09.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:26:09.080 00:26:09.080 --- 10.0.0.2 ping statistics --- 00:26:09.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.080 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:26:09.080 20:21:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:09.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:09.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:26:09.080 00:26:09.080 --- 10.0.0.1 ping statistics --- 00:26:09.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.081 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:26:09.081 20:21:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:09.081 20:21:06 -- nvmf/common.sh@410 -- # return 0 00:26:09.081 20:21:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:09.081 20:21:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:09.081 20:21:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:09.081 20:21:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:09.081 20:21:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:09.081 20:21:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:09.081 20:21:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:09.081 20:21:06 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1643380 00:26:09.081 20:21:06 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:09.081 20:21:06 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1643380 00:26:09.081 20:21:06 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:09.081 20:21:06 -- common/autotest_common.sh@819 -- # '[' -z 1643380 ']' 00:26:09.081 20:21:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.081 20:21:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:09.081 20:21:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
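nvmfappstart backgrounds nvmf_tgt inside the target namespace and then waitforlisten polls until the new process answers on /var/tmp/spdk.sock before any rpc_cmd calls are made. A rough sketch of such a wait loop, written for this note rather than copied from autotest_common.sh (wait_for_rpc is a hypothetical name, $rootdir again the spdk checkout):

    wait_for_rpc() {
        # $1 = pid of the freshly started target, $2 = RPC socket (defaults to spdk.sock)
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1      # target died before it ever listened
            [[ -S $sock ]] && "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                        # gave up after ~10 s
    }

    wait_for_rpc "$nvmfpid" /var/tmp/spdk.sock || exit 1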
00:26:09.081 20:21:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:09.081 20:21:06 -- common/autotest_common.sh@10 -- # set +x 00:26:09.646 20:21:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:09.647 20:21:07 -- common/autotest_common.sh@852 -- # return 0 00:26:09.647 20:21:07 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:09.647 20:21:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.647 20:21:07 -- common/autotest_common.sh@10 -- # set +x 00:26:09.647 20:21:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.647 20:21:07 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:26:09.647 20:21:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.647 20:21:07 -- common/autotest_common.sh@10 -- # set +x 00:26:09.647 Malloc0 00:26:09.647 20:21:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.647 20:21:07 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:09.647 20:21:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.647 20:21:07 -- common/autotest_common.sh@10 -- # set +x 00:26:09.647 20:21:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.647 20:21:07 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:09.647 20:21:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.647 20:21:07 -- common/autotest_common.sh@10 -- # set +x 00:26:09.647 20:21:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.647 20:21:07 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:09.647 20:21:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.647 20:21:07 -- common/autotest_common.sh@10 -- # set +x 00:26:09.647 20:21:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.647 20:21:07 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:26:09.647 20:21:07 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:41.781 Fuzzing completed. Shutting down the fuzz application 00:26:41.781 00:26:41.781 Dumping successful admin opcodes: 00:26:41.781 8, 9, 10, 24, 00:26:41.781 Dumping successful io opcodes: 00:26:41.781 0, 9, 00:26:41.781 NS: 0x200003aefec0 I/O qp, Total commands completed: 802083, total successful commands: 4665, random_seed: 4197465344 00:26:41.781 NS: 0x200003aefec0 admin qp, Total commands completed: 76080, total successful commands: 593, random_seed: 623768960 00:26:41.781 20:21:37 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:41.781 Fuzzing completed. 
Shutting down the fuzz application 00:26:41.781 00:26:41.781 Dumping successful admin opcodes: 00:26:41.781 24, 00:26:41.781 Dumping successful io opcodes: 00:26:41.781 00:26:41.781 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3647581379 00:26:41.781 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3647672233 00:26:41.781 20:21:39 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:41.781 20:21:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.781 20:21:39 -- common/autotest_common.sh@10 -- # set +x 00:26:41.781 20:21:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.781 20:21:39 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:41.781 20:21:39 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:41.781 20:21:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:41.781 20:21:39 -- nvmf/common.sh@116 -- # sync 00:26:41.781 20:21:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:41.781 20:21:39 -- nvmf/common.sh@119 -- # set +e 00:26:41.781 20:21:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:41.781 20:21:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:41.781 rmmod nvme_tcp 00:26:41.781 rmmod nvme_fabrics 00:26:41.781 rmmod nvme_keyring 00:26:41.781 20:21:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:41.781 20:21:39 -- nvmf/common.sh@123 -- # set -e 00:26:41.781 20:21:39 -- nvmf/common.sh@124 -- # return 0 00:26:41.781 20:21:39 -- nvmf/common.sh@477 -- # '[' -n 1643380 ']' 00:26:41.781 20:21:39 -- nvmf/common.sh@478 -- # killprocess 1643380 00:26:41.781 20:21:39 -- common/autotest_common.sh@926 -- # '[' -z 1643380 ']' 00:26:41.781 20:21:39 -- common/autotest_common.sh@930 -- # kill -0 1643380 00:26:41.781 20:21:39 -- common/autotest_common.sh@931 -- # uname 00:26:41.781 20:21:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:41.781 20:21:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1643380 00:26:41.781 20:21:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:41.781 20:21:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:41.781 20:21:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1643380' 00:26:41.781 killing process with pid 1643380 00:26:41.781 20:21:39 -- common/autotest_common.sh@945 -- # kill 1643380 00:26:41.781 20:21:39 -- common/autotest_common.sh@950 -- # wait 1643380 00:26:42.349 20:21:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:42.349 20:21:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:42.349 20:21:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:42.349 20:21:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:42.349 20:21:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:42.349 20:21:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.349 20:21:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:42.349 20:21:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.256 20:21:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:44.256 20:21:42 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:44.514 00:26:44.514 real 0m40.753s 00:26:44.514 user 0m57.798s 00:26:44.514 sys 0m12.534s 
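The two nvme_fuzz passes above differ only in where their commands come from: the first run (-t 30 -S 123456) generates commands from a fixed random seed for 30 seconds, which is why it pushes roughly 800k I/O commands through the qpair, while the second swaps the timer and seed for -j example.json and replays that small canned command list, finishing after only 16 admin commands. Reduced to the two invocations from the trace, with the transport ID in a shell variable and $rootdir standing for the spdk checkout:

    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

    # pass 1: 30 seconds of commands generated from seed 123456
    $rootdir/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz \
        -t 30 -S 123456 -F "$trid" -N -a

    # pass 2: replay the bundled example.json command list instead
    $rootdir/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz \
        -F "$trid" -j $rootdir/test/app/fuzz/nvme_fuzz/example.json -a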
00:26:44.514 20:21:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:44.514 20:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:44.514 ************************************ 00:26:44.514 END TEST nvmf_fuzz 00:26:44.514 ************************************ 00:26:44.514 20:21:42 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:44.514 20:21:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:44.514 20:21:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:44.514 20:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:44.514 ************************************ 00:26:44.514 START TEST nvmf_multiconnection 00:26:44.514 ************************************ 00:26:44.514 20:21:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:44.514 * Looking for test storage... 00:26:44.514 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:26:44.514 20:21:42 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.514 20:21:42 -- nvmf/common.sh@7 -- # uname -s 00:26:44.514 20:21:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.514 20:21:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.514 20:21:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.514 20:21:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.514 20:21:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.514 20:21:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.514 20:21:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.514 20:21:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.514 20:21:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.514 20:21:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.514 20:21:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:26:44.514 20:21:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:26:44.514 20:21:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.514 20:21:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.514 20:21:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:44.514 20:21:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:26:44.514 20:21:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.514 20:21:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.514 20:21:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.514 20:21:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.514 20:21:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.515 20:21:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.515 20:21:42 -- paths/export.sh@5 -- # export PATH 00:26:44.515 20:21:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.515 20:21:42 -- nvmf/common.sh@46 -- # : 0 00:26:44.515 20:21:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:44.515 20:21:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:44.515 20:21:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:44.515 20:21:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.515 20:21:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.515 20:21:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:44.515 20:21:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:44.515 20:21:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:44.515 20:21:42 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:44.515 20:21:42 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:44.515 20:21:42 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:44.515 20:21:42 -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:44.515 20:21:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:44.515 20:21:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.515 20:21:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:44.515 20:21:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:44.515 20:21:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:44.515 20:21:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.515 20:21:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:44.515 20:21:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.515 20:21:42 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:26:44.515 20:21:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:44.515 20:21:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:44.515 20:21:42 -- 
common/autotest_common.sh@10 -- # set +x 00:26:49.845 20:21:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:49.845 20:21:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:49.845 20:21:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:49.845 20:21:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:49.845 20:21:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:49.845 20:21:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:49.845 20:21:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:49.845 20:21:47 -- nvmf/common.sh@294 -- # net_devs=() 00:26:49.845 20:21:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:49.845 20:21:47 -- nvmf/common.sh@295 -- # e810=() 00:26:49.845 20:21:47 -- nvmf/common.sh@295 -- # local -ga e810 00:26:49.845 20:21:47 -- nvmf/common.sh@296 -- # x722=() 00:26:49.845 20:21:47 -- nvmf/common.sh@296 -- # local -ga x722 00:26:49.845 20:21:47 -- nvmf/common.sh@297 -- # mlx=() 00:26:49.845 20:21:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:49.845 20:21:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.845 20:21:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.845 20:21:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.845 20:21:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.845 20:21:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.845 20:21:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.845 20:21:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.845 20:21:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.845 20:21:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.845 20:21:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.845 20:21:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.845 20:21:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:49.845 20:21:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:49.845 20:21:47 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:26:49.845 20:21:47 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:26:49.845 20:21:47 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:26:49.845 20:21:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:49.845 20:21:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:49.845 20:21:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:26:49.845 Found 0000:27:00.0 (0x8086 - 0x159b) 00:26:49.845 20:21:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:49.845 20:21:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:49.845 20:21:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.845 20:21:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.845 20:21:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:49.845 20:21:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:49.845 20:21:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:26:49.845 Found 0000:27:00.1 (0x8086 - 0x159b) 00:26:49.845 20:21:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:49.845 20:21:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:49.845 20:21:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.845 20:21:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.845 
20:21:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:49.845 20:21:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:49.845 20:21:47 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:26:49.845 20:21:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:49.845 20:21:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.845 20:21:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:49.845 20:21:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.845 20:21:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:26:49.845 Found net devices under 0000:27:00.0: cvl_0_0 00:26:49.845 20:21:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.845 20:21:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:49.845 20:21:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.845 20:21:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:49.845 20:21:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.845 20:21:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:26:49.845 Found net devices under 0000:27:00.1: cvl_0_1 00:26:49.845 20:21:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.845 20:21:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:49.845 20:21:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:49.845 20:21:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:49.845 20:21:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:49.845 20:21:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:49.845 20:21:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.845 20:21:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.845 20:21:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.845 20:21:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:49.845 20:21:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.845 20:21:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.845 20:21:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:49.845 20:21:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.845 20:21:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.845 20:21:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:49.845 20:21:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:49.845 20:21:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.846 20:21:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:49.846 20:21:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:49.846 20:21:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.846 20:21:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:49.846 20:21:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.846 20:21:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:49.846 20:21:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:49.846 20:21:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:49.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:49.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:26:49.846 00:26:49.846 --- 10.0.0.2 ping statistics --- 00:26:49.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.846 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:26:49.846 20:21:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:49.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:49.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:26:49.846 00:26:49.846 --- 10.0.0.1 ping statistics --- 00:26:49.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.846 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:26:49.846 20:21:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.846 20:21:47 -- nvmf/common.sh@410 -- # return 0 00:26:49.846 20:21:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:49.846 20:21:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.846 20:21:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:49.846 20:21:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:49.846 20:21:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.846 20:21:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:49.846 20:21:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:49.846 20:21:47 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:49.846 20:21:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:49.846 20:21:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:49.846 20:21:47 -- common/autotest_common.sh@10 -- # set +x 00:26:49.846 20:21:47 -- nvmf/common.sh@469 -- # nvmfpid=1654054 00:26:49.846 20:21:47 -- nvmf/common.sh@470 -- # waitforlisten 1654054 00:26:49.846 20:21:47 -- common/autotest_common.sh@819 -- # '[' -z 1654054 ']' 00:26:49.846 20:21:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.846 20:21:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:49.846 20:21:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.846 20:21:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:49.846 20:21:47 -- common/autotest_common.sh@10 -- # set +x 00:26:49.846 20:21:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:49.846 [2024-04-25 20:21:47.714736] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:49.846 [2024-04-25 20:21:47.714844] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.104 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.104 [2024-04-25 20:21:47.833502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:50.104 [2024-04-25 20:21:47.926336] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:50.104 [2024-04-25 20:21:47.926506] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.104 [2024-04-25 20:21:47.926519] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
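The nvmf_tcp_init steps traced above build the test topology by moving the target-side port (cvl_0_0) into a private network namespace while the initiator-side port (cvl_0_1) stays in the default one. Collected in one place, with every command copied from the log (run as root), the sequence is roughly:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port now visible only inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in on the initiator port
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The target is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as logged above), so the host-side NVMe initiator reaches it over a real TCP path at 10.0.0.2:4420.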
00:26:50.104 [2024-04-25 20:21:47.926528] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:50.104 [2024-04-25 20:21:47.926602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.104 [2024-04-25 20:21:47.926698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.104 [2024-04-25 20:21:47.926799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.104 [2024-04-25 20:21:47.926810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:50.671 20:21:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:50.671 20:21:48 -- common/autotest_common.sh@852 -- # return 0 00:26:50.671 20:21:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:50.671 20:21:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:50.671 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.671 20:21:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.671 20:21:48 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:50.671 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.671 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.671 [2024-04-25 20:21:48.443673] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.671 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.671 20:21:48 -- target/multiconnection.sh@21 -- # seq 1 11 00:26:50.671 20:21:48 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.671 20:21:48 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:50.671 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.671 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.671 Malloc1 00:26:50.671 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.671 20:21:48 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:50.671 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.671 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.671 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.671 20:21:48 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:50.671 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.671 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.671 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.671 20:21:48 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.671 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.671 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.671 [2024-04-25 20:21:48.515812] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.671 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.671 20:21:48 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.671 20:21:48 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:50.671 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.671 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.671 Malloc2 00:26:50.671 20:21:48 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.671 20:21:48 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:50.671 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.671 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.671 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.671 20:21:48 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:50.671 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.671 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.671 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.671 20:21:48 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:50.671 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.671 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.671 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.671 20:21:48 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.671 20:21:48 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:50.671 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.671 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.932 Malloc3 00:26:50.932 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.932 20:21:48 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:50.932 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.932 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.932 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.932 20:21:48 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:50.932 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.932 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.932 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.932 20:21:48 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:50.932 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.932 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.932 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.932 20:21:48 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.933 20:21:48 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:50.933 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.933 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.933 Malloc4 00:26:50.933 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.933 20:21:48 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:50.933 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.933 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.933 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.933 20:21:48 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:50.933 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.933 20:21:48 
-- common/autotest_common.sh@10 -- # set +x 00:26:50.933 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.933 20:21:48 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:50.933 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.933 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.933 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.933 20:21:48 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.933 20:21:48 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:50.933 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.933 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.933 Malloc5 00:26:50.933 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.933 20:21:48 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:50.933 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.933 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.933 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.933 20:21:48 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:50.933 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.933 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.933 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.933 20:21:48 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:50.933 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.933 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.933 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.933 20:21:48 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.933 20:21:48 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:50.933 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.933 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.933 Malloc6 00:26:50.933 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.933 20:21:48 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:50.933 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.933 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.933 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.933 20:21:48 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:50.933 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.933 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.933 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.933 20:21:48 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:50.933 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.933 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.933 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.933 20:21:48 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.933 20:21:48 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:50.933 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.933 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.933 Malloc7 00:26:50.933 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.933 20:21:48 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:50.933 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.933 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.933 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.933 20:21:48 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:50.933 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.933 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:50.933 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.933 20:21:48 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:26:50.933 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.933 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:51.194 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.194 20:21:48 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.194 20:21:48 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:51.194 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.194 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:51.194 Malloc8 00:26:51.194 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.194 20:21:48 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:51.194 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.194 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:51.194 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.194 20:21:48 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:51.194 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.194 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:51.194 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.194 20:21:48 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:51.194 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.194 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:51.194 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.194 20:21:48 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.194 20:21:48 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:51.194 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.194 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:51.194 Malloc9 00:26:51.194 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.194 20:21:48 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:51.194 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.194 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:51.194 20:21:48 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.194 20:21:48 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:51.194 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.194 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:51.194 20:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.194 20:21:48 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:51.194 20:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.194 20:21:48 -- common/autotest_common.sh@10 -- # set +x 00:26:51.194 20:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.194 20:21:49 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.194 20:21:49 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:51.194 20:21:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.194 20:21:49 -- common/autotest_common.sh@10 -- # set +x 00:26:51.194 Malloc10 00:26:51.194 20:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.194 20:21:49 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:51.194 20:21:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.194 20:21:49 -- common/autotest_common.sh@10 -- # set +x 00:26:51.194 20:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.194 20:21:49 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:51.194 20:21:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.194 20:21:49 -- common/autotest_common.sh@10 -- # set +x 00:26:51.194 20:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.194 20:21:49 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:51.194 20:21:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.194 20:21:49 -- common/autotest_common.sh@10 -- # set +x 00:26:51.194 20:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.194 20:21:49 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.194 20:21:49 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:51.194 20:21:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.194 20:21:49 -- common/autotest_common.sh@10 -- # set +x 00:26:51.194 Malloc11 00:26:51.194 20:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.194 20:21:49 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:51.194 20:21:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.194 20:21:49 -- common/autotest_common.sh@10 -- # set +x 00:26:51.194 20:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.194 20:21:49 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:51.195 20:21:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.195 20:21:49 -- common/autotest_common.sh@10 -- # set +x 00:26:51.195 20:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.195 20:21:49 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:26:51.195 20:21:49 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:26:51.195 20:21:49 -- common/autotest_common.sh@10 -- # set +x 00:26:51.195 20:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.195 20:21:49 -- target/multiconnection.sh@28 -- # seq 1 11 00:26:51.195 20:21:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.195 20:21:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:53.099 20:21:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:53.099 20:21:50 -- common/autotest_common.sh@1177 -- # local i=0 00:26:53.099 20:21:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:26:53.099 20:21:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:26:53.099 20:21:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:26:55.006 20:21:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:26:55.006 20:21:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:26:55.006 20:21:52 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:26:55.006 20:21:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:26:55.006 20:21:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:26:55.006 20:21:52 -- common/autotest_common.sh@1187 -- # return 0 00:26:55.006 20:21:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.006 20:21:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:56.379 20:21:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:56.379 20:21:53 -- common/autotest_common.sh@1177 -- # local i=0 00:26:56.379 20:21:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:26:56.379 20:21:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:26:56.379 20:21:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:26:58.281 20:21:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:26:58.281 20:21:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:26:58.281 20:21:56 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:26:58.281 20:21:56 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:26:58.282 20:21:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:26:58.282 20:21:56 -- common/autotest_common.sh@1187 -- # return 0 00:26:58.282 20:21:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:58.282 20:21:56 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:27:00.186 20:21:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:27:00.186 20:21:57 -- common/autotest_common.sh@1177 -- # local i=0 00:27:00.186 20:21:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:00.186 20:21:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:00.186 20:21:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:02.090 20:21:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:02.090 20:21:59 -- common/autotest_common.sh@1186 -- # 
lsblk -l -o NAME,SERIAL 00:27:02.090 20:21:59 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:27:02.090 20:21:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:02.090 20:21:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:02.090 20:21:59 -- common/autotest_common.sh@1187 -- # return 0 00:27:02.090 20:21:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:02.090 20:21:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:27:03.466 20:22:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:27:03.466 20:22:01 -- common/autotest_common.sh@1177 -- # local i=0 00:27:03.466 20:22:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:03.466 20:22:01 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:03.466 20:22:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:05.370 20:22:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:05.370 20:22:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:05.370 20:22:03 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:27:05.370 20:22:03 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:05.370 20:22:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:05.370 20:22:03 -- common/autotest_common.sh@1187 -- # return 0 00:27:05.370 20:22:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:05.370 20:22:03 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:27:07.271 20:22:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:27:07.271 20:22:04 -- common/autotest_common.sh@1177 -- # local i=0 00:27:07.271 20:22:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:07.271 20:22:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:07.271 20:22:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:09.174 20:22:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:09.174 20:22:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:09.174 20:22:06 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:27:09.174 20:22:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:09.174 20:22:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:09.174 20:22:06 -- common/autotest_common.sh@1187 -- # return 0 00:27:09.174 20:22:06 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:09.174 20:22:06 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:27:10.613 20:22:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:27:10.613 20:22:08 -- common/autotest_common.sh@1177 -- # local i=0 00:27:10.613 20:22:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:10.613 20:22:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:10.613 20:22:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:13.142 
20:22:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:13.142 20:22:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:13.142 20:22:10 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:27:13.142 20:22:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:13.142 20:22:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:13.142 20:22:10 -- common/autotest_common.sh@1187 -- # return 0 00:27:13.142 20:22:10 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:13.142 20:22:10 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:27:14.524 20:22:12 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:27:14.524 20:22:12 -- common/autotest_common.sh@1177 -- # local i=0 00:27:14.524 20:22:12 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:14.524 20:22:12 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:14.524 20:22:12 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:17.054 20:22:14 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:17.054 20:22:14 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:17.054 20:22:14 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:27:17.054 20:22:14 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:17.054 20:22:14 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:17.054 20:22:14 -- common/autotest_common.sh@1187 -- # return 0 00:27:17.054 20:22:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.054 20:22:14 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:27:18.431 20:22:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:18.431 20:22:16 -- common/autotest_common.sh@1177 -- # local i=0 00:27:18.431 20:22:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:18.431 20:22:16 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:18.431 20:22:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:20.423 20:22:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:20.423 20:22:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:20.423 20:22:18 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:27:20.423 20:22:18 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:20.423 20:22:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:20.423 20:22:18 -- common/autotest_common.sh@1187 -- # return 0 00:27:20.423 20:22:18 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:20.423 20:22:18 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:27:22.325 20:22:19 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:27:22.325 20:22:19 -- common/autotest_common.sh@1177 -- # local i=0 00:27:22.325 20:22:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:22.325 20:22:19 -- 
common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:22.325 20:22:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:24.229 20:22:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:24.229 20:22:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:24.229 20:22:21 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:27:24.229 20:22:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:24.229 20:22:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:24.229 20:22:21 -- common/autotest_common.sh@1187 -- # return 0 00:27:24.229 20:22:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.229 20:22:21 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:26.135 20:22:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:26.135 20:22:23 -- common/autotest_common.sh@1177 -- # local i=0 00:27:26.135 20:22:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:26.135 20:22:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:26.135 20:22:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:28.038 20:22:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:28.038 20:22:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:28.038 20:22:25 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:27:28.038 20:22:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:28.038 20:22:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:28.038 20:22:25 -- common/autotest_common.sh@1187 -- # return 0 00:27:28.038 20:22:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:28.038 20:22:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:29.945 20:22:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:29.945 20:22:27 -- common/autotest_common.sh@1177 -- # local i=0 00:27:29.945 20:22:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:29.945 20:22:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:29.945 20:22:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:31.850 20:22:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:31.850 20:22:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:31.850 20:22:29 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:27:31.850 20:22:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:31.850 20:22:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:31.850 20:22:29 -- common/autotest_common.sh@1187 -- # return 0 00:27:31.850 20:22:29 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:31.850 [global] 00:27:31.850 thread=1 00:27:31.850 invalidate=1 00:27:31.850 rw=read 00:27:31.850 time_based=1 00:27:31.850 runtime=10 00:27:31.850 ioengine=libaio 00:27:31.850 direct=1 00:27:31.850 bs=262144 00:27:31.850 iodepth=64 00:27:31.850 norandommap=1 00:27:31.850 numjobs=1 00:27:31.850 00:27:31.850 [job0] 
00:27:31.850 filename=/dev/nvme0n1 00:27:31.850 [job1] 00:27:31.850 filename=/dev/nvme10n1 00:27:31.850 [job2] 00:27:31.850 filename=/dev/nvme1n1 00:27:31.850 [job3] 00:27:31.850 filename=/dev/nvme2n1 00:27:31.850 [job4] 00:27:31.850 filename=/dev/nvme3n1 00:27:31.850 [job5] 00:27:31.850 filename=/dev/nvme4n1 00:27:31.850 [job6] 00:27:31.850 filename=/dev/nvme5n1 00:27:31.850 [job7] 00:27:31.850 filename=/dev/nvme6n1 00:27:31.850 [job8] 00:27:31.850 filename=/dev/nvme7n1 00:27:31.850 [job9] 00:27:31.850 filename=/dev/nvme8n1 00:27:31.850 [job10] 00:27:31.850 filename=/dev/nvme9n1 00:27:32.117 Could not set queue depth (nvme0n1) 00:27:32.117 Could not set queue depth (nvme10n1) 00:27:32.117 Could not set queue depth (nvme1n1) 00:27:32.117 Could not set queue depth (nvme2n1) 00:27:32.117 Could not set queue depth (nvme3n1) 00:27:32.117 Could not set queue depth (nvme4n1) 00:27:32.117 Could not set queue depth (nvme5n1) 00:27:32.117 Could not set queue depth (nvme6n1) 00:27:32.117 Could not set queue depth (nvme7n1) 00:27:32.117 Could not set queue depth (nvme8n1) 00:27:32.117 Could not set queue depth (nvme9n1) 00:27:32.378 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:32.378 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:32.378 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:32.378 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:32.378 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:32.378 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:32.378 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:32.378 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:32.378 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:32.378 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:32.378 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:32.378 fio-3.35 00:27:32.378 Starting 11 threads 00:27:44.600 00:27:44.600 job0: (groupid=0, jobs=1): err= 0: pid=1662585: Thu Apr 25 20:22:40 2024 00:27:44.600 read: IOPS=686, BW=172MiB/s (180MB/s)(1733MiB/10098msec) 00:27:44.600 slat (usec): min=7, max=90969, avg=1290.76, stdev=4300.58 00:27:44.600 clat (msec): min=5, max=234, avg=91.87, stdev=42.54 00:27:44.600 lat (msec): min=5, max=236, avg=93.16, stdev=43.25 00:27:44.600 clat percentiles (msec): 00:27:44.600 | 1.00th=[ 27], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 45], 00:27:44.600 | 30.00th=[ 62], 40.00th=[ 83], 50.00th=[ 99], 60.00th=[ 109], 00:27:44.600 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 144], 95.00th=[ 159], 00:27:44.600 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 197], 99.95th=[ 226], 00:27:44.600 | 99.99th=[ 234] 00:27:44.600 bw ( KiB/s): min=94208, max=508928, per=8.49%, avg=175821.75, stdev=95585.01, samples=20 00:27:44.600 iops : min= 368, max= 1988, avg=686.75, stdev=373.40, samples=20 00:27:44.600 lat (msec) : 10=0.12%, 20=0.43%, 50=21.62%, 100=29.02%, 250=48.80% 00:27:44.600 
cpu : usr=0.05%, sys=1.42%, ctx=1452, majf=0, minf=4097 00:27:44.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:44.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:44.600 issued rwts: total=6932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:44.600 job1: (groupid=0, jobs=1): err= 0: pid=1662586: Thu Apr 25 20:22:40 2024 00:27:44.600 read: IOPS=744, BW=186MiB/s (195MB/s)(1875MiB/10079msec) 00:27:44.600 slat (usec): min=6, max=118349, avg=1307.96, stdev=4007.52 00:27:44.600 clat (msec): min=4, max=252, avg=84.65, stdev=37.46 00:27:44.600 lat (msec): min=5, max=252, avg=85.96, stdev=38.03 00:27:44.600 clat percentiles (msec): 00:27:44.600 | 1.00th=[ 14], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 42], 00:27:44.600 | 30.00th=[ 64], 40.00th=[ 80], 50.00th=[ 90], 60.00th=[ 99], 00:27:44.600 | 70.00th=[ 107], 80.00th=[ 115], 90.00th=[ 131], 95.00th=[ 142], 00:27:44.600 | 99.00th=[ 165], 99.50th=[ 178], 99.90th=[ 213], 99.95th=[ 213], 00:27:44.600 | 99.99th=[ 253] 00:27:44.600 bw ( KiB/s): min=108544, max=487424, per=9.19%, avg=190332.90, stdev=91768.94, samples=20 00:27:44.600 iops : min= 424, max= 1904, avg=743.40, stdev=358.51, samples=20 00:27:44.600 lat (msec) : 10=0.52%, 20=1.19%, 50=19.91%, 100=40.26%, 250=38.11% 00:27:44.600 lat (msec) : 500=0.01% 00:27:44.600 cpu : usr=0.15%, sys=1.37%, ctx=1410, majf=0, minf=4097 00:27:44.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:44.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:44.600 issued rwts: total=7499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:44.600 job2: (groupid=0, jobs=1): err= 0: pid=1662587: Thu Apr 25 20:22:40 2024 00:27:44.600 read: IOPS=963, BW=241MiB/s (253MB/s)(2422MiB/10056msec) 00:27:44.600 slat (usec): min=7, max=47747, avg=932.91, stdev=2909.15 00:27:44.600 clat (msec): min=6, max=193, avg=65.46, stdev=32.04 00:27:44.600 lat (msec): min=6, max=194, avg=66.39, stdev=32.45 00:27:44.600 clat percentiles (msec): 00:27:44.600 | 1.00th=[ 26], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 35], 00:27:44.600 | 30.00th=[ 42], 40.00th=[ 52], 50.00th=[ 59], 60.00th=[ 68], 00:27:44.600 | 70.00th=[ 80], 80.00th=[ 95], 90.00th=[ 112], 95.00th=[ 127], 00:27:44.600 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 167], 99.95th=[ 174], 00:27:44.600 | 99.99th=[ 194] 00:27:44.600 bw ( KiB/s): min=122880, max=451584, per=11.89%, avg=246378.40, stdev=102868.83, samples=20 00:27:44.600 iops : min= 480, max= 1764, avg=962.35, stdev=401.83, samples=20 00:27:44.600 lat (msec) : 10=0.28%, 20=0.35%, 50=38.45%, 100=45.07%, 250=15.85% 00:27:44.600 cpu : usr=0.15%, sys=2.08%, ctx=1874, majf=0, minf=4097 00:27:44.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:27:44.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:44.600 issued rwts: total=9687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:44.600 job3: (groupid=0, jobs=1): err= 0: pid=1662588: Thu Apr 25 20:22:40 2024 00:27:44.600 read: IOPS=620, BW=155MiB/s 
(163MB/s)(1573MiB/10138msec) 00:27:44.600 slat (usec): min=5, max=64519, avg=882.86, stdev=3665.92 00:27:44.600 clat (msec): min=5, max=242, avg=102.12, stdev=36.88 00:27:44.601 lat (msec): min=5, max=242, avg=103.00, stdev=37.37 00:27:44.601 clat percentiles (msec): 00:27:44.601 | 1.00th=[ 14], 5.00th=[ 29], 10.00th=[ 50], 20.00th=[ 72], 00:27:44.601 | 30.00th=[ 92], 40.00th=[ 102], 50.00th=[ 107], 60.00th=[ 113], 00:27:44.601 | 70.00th=[ 121], 80.00th=[ 130], 90.00th=[ 144], 95.00th=[ 157], 00:27:44.601 | 99.00th=[ 182], 99.50th=[ 203], 99.90th=[ 236], 99.95th=[ 243], 00:27:44.601 | 99.99th=[ 243] 00:27:44.601 bw ( KiB/s): min=94019, max=247808, per=7.70%, avg=159445.90, stdev=38172.76, samples=20 00:27:44.601 iops : min= 367, max= 968, avg=622.75, stdev=149.16, samples=20 00:27:44.601 lat (msec) : 10=0.32%, 20=1.86%, 50=7.95%, 100=29.08%, 250=60.80% 00:27:44.601 cpu : usr=0.12%, sys=1.54%, ctx=1466, majf=0, minf=4097 00:27:44.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:44.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:44.601 issued rwts: total=6293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:44.601 job4: (groupid=0, jobs=1): err= 0: pid=1662589: Thu Apr 25 20:22:40 2024 00:27:44.601 read: IOPS=783, BW=196MiB/s (205MB/s)(1976MiB/10094msec) 00:27:44.601 slat (usec): min=7, max=41886, avg=1160.02, stdev=3427.47 00:27:44.601 clat (usec): min=1467, max=198806, avg=80506.25, stdev=35129.05 00:27:44.601 lat (usec): min=1492, max=198834, avg=81666.27, stdev=35607.69 00:27:44.601 clat percentiles (msec): 00:27:44.601 | 1.00th=[ 12], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 33], 00:27:44.601 | 30.00th=[ 67], 40.00th=[ 78], 50.00th=[ 86], 60.00th=[ 94], 00:27:44.601 | 70.00th=[ 103], 80.00th=[ 110], 90.00th=[ 124], 95.00th=[ 133], 00:27:44.601 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 180], 99.95th=[ 180], 00:27:44.601 | 99.99th=[ 199] 00:27:44.601 bw ( KiB/s): min=120832, max=479232, per=9.69%, avg=200724.80, stdev=93780.14, samples=20 00:27:44.601 iops : min= 472, max= 1872, avg=784.00, stdev=366.35, samples=20 00:27:44.601 lat (msec) : 2=0.04%, 4=0.38%, 10=0.39%, 20=1.92%, 50=19.99% 00:27:44.601 lat (msec) : 100=44.78%, 250=32.50% 00:27:44.601 cpu : usr=0.16%, sys=2.03%, ctx=1552, majf=0, minf=4097 00:27:44.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:44.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:44.601 issued rwts: total=7905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:44.601 job5: (groupid=0, jobs=1): err= 0: pid=1662590: Thu Apr 25 20:22:40 2024 00:27:44.601 read: IOPS=694, BW=174MiB/s (182MB/s)(1749MiB/10079msec) 00:27:44.601 slat (usec): min=7, max=79664, avg=1205.00, stdev=3901.77 00:27:44.601 clat (msec): min=2, max=204, avg=90.93, stdev=34.35 00:27:44.601 lat (msec): min=2, max=222, avg=92.14, stdev=34.84 00:27:44.601 clat percentiles (msec): 00:27:44.601 | 1.00th=[ 11], 5.00th=[ 26], 10.00th=[ 48], 20.00th=[ 62], 00:27:44.601 | 30.00th=[ 72], 40.00th=[ 85], 50.00th=[ 95], 60.00th=[ 104], 00:27:44.601 | 70.00th=[ 112], 80.00th=[ 121], 90.00th=[ 132], 95.00th=[ 142], 00:27:44.601 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 184], 
99.95th=[ 186], 00:27:44.601 | 99.99th=[ 205] 00:27:44.601 bw ( KiB/s): min=127488, max=270848, per=8.56%, avg=177424.30, stdev=39838.63, samples=20 00:27:44.601 iops : min= 498, max= 1058, avg=692.95, stdev=155.64, samples=20 00:27:44.601 lat (msec) : 4=0.03%, 10=0.96%, 20=2.93%, 50=7.45%, 100=43.92% 00:27:44.601 lat (msec) : 250=44.72% 00:27:44.601 cpu : usr=0.17%, sys=1.88%, ctx=1425, majf=0, minf=4097 00:27:44.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:44.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:44.601 issued rwts: total=6995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:44.601 job6: (groupid=0, jobs=1): err= 0: pid=1662591: Thu Apr 25 20:22:40 2024 00:27:44.601 read: IOPS=644, BW=161MiB/s (169MB/s)(1623MiB/10074msec) 00:27:44.601 slat (usec): min=5, max=91318, avg=987.76, stdev=3878.77 00:27:44.601 clat (msec): min=2, max=219, avg=98.27, stdev=39.05 00:27:44.601 lat (msec): min=2, max=251, avg=99.26, stdev=39.43 00:27:44.601 clat percentiles (msec): 00:27:44.601 | 1.00th=[ 7], 5.00th=[ 16], 10.00th=[ 38], 20.00th=[ 67], 00:27:44.601 | 30.00th=[ 84], 40.00th=[ 99], 50.00th=[ 106], 60.00th=[ 113], 00:27:44.601 | 70.00th=[ 123], 80.00th=[ 130], 90.00th=[ 140], 95.00th=[ 153], 00:27:44.601 | 99.00th=[ 174], 99.50th=[ 184], 99.90th=[ 197], 99.95th=[ 197], 00:27:44.601 | 99.99th=[ 220] 00:27:44.601 bw ( KiB/s): min=98816, max=267264, per=7.94%, avg=164532.65, stdev=42814.47, samples=20 00:27:44.601 iops : min= 386, max= 1044, avg=642.55, stdev=167.36, samples=20 00:27:44.601 lat (msec) : 4=0.17%, 10=2.42%, 20=3.44%, 50=6.27%, 100=30.70% 00:27:44.601 lat (msec) : 250=57.00% 00:27:44.601 cpu : usr=0.12%, sys=1.65%, ctx=1457, majf=0, minf=4097 00:27:44.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:44.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:44.601 issued rwts: total=6491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:44.601 job7: (groupid=0, jobs=1): err= 0: pid=1662592: Thu Apr 25 20:22:40 2024 00:27:44.601 read: IOPS=1035, BW=259MiB/s (272MB/s)(2604MiB/10053msec) 00:27:44.601 slat (usec): min=6, max=81793, avg=937.86, stdev=3152.85 00:27:44.601 clat (msec): min=9, max=209, avg=60.80, stdev=40.55 00:27:44.601 lat (msec): min=9, max=222, avg=61.74, stdev=41.18 00:27:44.601 clat percentiles (msec): 00:27:44.601 | 1.00th=[ 18], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 29], 00:27:44.601 | 30.00th=[ 30], 40.00th=[ 32], 50.00th=[ 34], 60.00th=[ 60], 00:27:44.601 | 70.00th=[ 86], 80.00th=[ 102], 90.00th=[ 123], 95.00th=[ 138], 00:27:44.601 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 194], 99.95th=[ 199], 00:27:44.601 | 99.99th=[ 201] 00:27:44.601 bw ( KiB/s): min=86528, max=541242, per=12.79%, avg=264969.85, stdev=165281.51, samples=20 00:27:44.601 iops : min= 338, max= 2114, avg=1035.00, stdev=645.60, samples=20 00:27:44.601 lat (msec) : 10=0.02%, 20=1.56%, 50=56.16%, 100=21.62%, 250=20.65% 00:27:44.601 cpu : usr=0.25%, sys=2.35%, ctx=2016, majf=0, minf=4097 00:27:44.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:27:44.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.601 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:44.601 issued rwts: total=10414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:44.601 job8: (groupid=0, jobs=1): err= 0: pid=1662608: Thu Apr 25 20:22:40 2024 00:27:44.601 read: IOPS=606, BW=152MiB/s (159MB/s)(1532MiB/10096msec) 00:27:44.601 slat (usec): min=11, max=44200, avg=1614.79, stdev=4217.33 00:27:44.601 clat (msec): min=29, max=217, avg=103.74, stdev=26.60 00:27:44.601 lat (msec): min=29, max=217, avg=105.36, stdev=27.02 00:27:44.601 clat percentiles (msec): 00:27:44.601 | 1.00th=[ 52], 5.00th=[ 62], 10.00th=[ 67], 20.00th=[ 78], 00:27:44.601 | 30.00th=[ 89], 40.00th=[ 99], 50.00th=[ 106], 60.00th=[ 113], 00:27:44.601 | 70.00th=[ 120], 80.00th=[ 126], 90.00th=[ 136], 95.00th=[ 146], 00:27:44.601 | 99.00th=[ 165], 99.50th=[ 180], 99.90th=[ 203], 99.95th=[ 203], 00:27:44.601 | 99.99th=[ 218] 00:27:44.601 bw ( KiB/s): min=104960, max=250368, per=7.49%, avg=155237.20, stdev=36269.68, samples=20 00:27:44.601 iops : min= 410, max= 978, avg=606.30, stdev=141.68, samples=20 00:27:44.601 lat (msec) : 50=0.55%, 100=41.79%, 250=57.65% 00:27:44.601 cpu : usr=0.14%, sys=1.98%, ctx=1247, majf=0, minf=4097 00:27:44.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:44.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:44.601 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:44.601 job9: (groupid=0, jobs=1): err= 0: pid=1662621: Thu Apr 25 20:22:40 2024 00:27:44.601 read: IOPS=643, BW=161MiB/s (169MB/s)(1623MiB/10096msec) 00:27:44.601 slat (usec): min=5, max=81127, avg=1043.99, stdev=3735.82 00:27:44.601 clat (usec): min=833, max=197466, avg=98405.73, stdev=40062.24 00:27:44.601 lat (usec): min=859, max=246808, avg=99449.72, stdev=40532.06 00:27:44.601 clat percentiles (msec): 00:27:44.601 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 29], 20.00th=[ 72], 00:27:44.601 | 30.00th=[ 90], 40.00th=[ 99], 50.00th=[ 106], 60.00th=[ 112], 00:27:44.601 | 70.00th=[ 121], 80.00th=[ 130], 90.00th=[ 144], 95.00th=[ 153], 00:27:44.601 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 194], 99.95th=[ 194], 00:27:44.601 | 99.99th=[ 199] 00:27:44.601 bw ( KiB/s): min=111616, max=268263, per=7.94%, avg=164494.00, stdev=40721.78, samples=20 00:27:44.601 iops : min= 436, max= 1047, avg=642.45, stdev=158.98, samples=20 00:27:44.601 lat (usec) : 1000=0.05% 00:27:44.601 lat (msec) : 2=0.12%, 4=0.66%, 10=2.97%, 20=3.79%, 50=7.41% 00:27:44.601 lat (msec) : 100=27.63%, 250=57.36% 00:27:44.601 cpu : usr=0.14%, sys=1.62%, ctx=1556, majf=0, minf=3597 00:27:44.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:44.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:44.601 issued rwts: total=6492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:44.601 job10: (groupid=0, jobs=1): err= 0: pid=1662631: Thu Apr 25 20:22:40 2024 00:27:44.601 read: IOPS=714, BW=179MiB/s (187MB/s)(1803MiB/10099msec) 00:27:44.601 slat (usec): min=6, max=99182, avg=780.64, stdev=3746.68 00:27:44.601 clat (usec): min=813, max=224116, avg=88765.48, stdev=42767.08 00:27:44.601 lat 
(usec): min=836, max=224143, avg=89546.12, stdev=43297.81 00:27:44.601 clat percentiles (msec): 00:27:44.601 | 1.00th=[ 3], 5.00th=[ 15], 10.00th=[ 26], 20.00th=[ 45], 00:27:44.601 | 30.00th=[ 69], 40.00th=[ 81], 50.00th=[ 93], 60.00th=[ 104], 00:27:44.601 | 70.00th=[ 117], 80.00th=[ 130], 90.00th=[ 144], 95.00th=[ 150], 00:27:44.601 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 194], 99.95th=[ 203], 00:27:44.601 | 99.99th=[ 224] 00:27:44.601 bw ( KiB/s): min=115712, max=332800, per=8.83%, avg=182950.70, stdev=58469.38, samples=20 00:27:44.601 iops : min= 452, max= 1300, avg=714.60, stdev=228.44, samples=20 00:27:44.601 lat (usec) : 1000=0.03% 00:27:44.602 lat (msec) : 2=0.78%, 4=0.51%, 10=1.65%, 20=4.78%, 50=14.67% 00:27:44.602 lat (msec) : 100=34.54%, 250=43.04% 00:27:44.602 cpu : usr=0.12%, sys=1.83%, ctx=1732, majf=0, minf=4097 00:27:44.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:27:44.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:44.602 issued rwts: total=7212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:44.602 00:27:44.602 Run status group 0 (all jobs): 00:27:44.602 READ: bw=2023MiB/s (2122MB/s), 152MiB/s-259MiB/s (159MB/s-272MB/s), io=20.0GiB (21.5GB), run=10053-10138msec 00:27:44.602 00:27:44.602 Disk stats (read/write): 00:27:44.602 nvme0n1: ios=13716/0, merge=0/0, ticks=1241862/0, in_queue=1241862, util=95.36% 00:27:44.602 nvme10n1: ios=14846/0, merge=0/0, ticks=1241147/0, in_queue=1241147, util=95.73% 00:27:44.602 nvme1n1: ios=19246/0, merge=0/0, ticks=1245117/0, in_queue=1245117, util=96.23% 00:27:44.602 nvme2n1: ios=12455/0, merge=0/0, ticks=1248693/0, in_queue=1248693, util=96.54% 00:27:44.602 nvme3n1: ios=15671/0, merge=0/0, ticks=1243633/0, in_queue=1243633, util=96.65% 00:27:44.602 nvme4n1: ios=13863/0, merge=0/0, ticks=1246690/0, in_queue=1246690, util=97.30% 00:27:44.602 nvme5n1: ios=12846/0, merge=0/0, ticks=1248471/0, in_queue=1248471, util=97.54% 00:27:44.602 nvme6n1: ios=20689/0, merge=0/0, ticks=1242509/0, in_queue=1242509, util=97.78% 00:27:44.602 nvme7n1: ios=12128/0, merge=0/0, ticks=1238952/0, in_queue=1238952, util=98.54% 00:27:44.602 nvme8n1: ios=12856/0, merge=0/0, ticks=1248125/0, in_queue=1248125, util=98.93% 00:27:44.602 nvme9n1: ios=14291/0, merge=0/0, ticks=1248810/0, in_queue=1248810, util=99.16% 00:27:44.602 20:22:40 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:44.602 [global] 00:27:44.602 thread=1 00:27:44.602 invalidate=1 00:27:44.602 rw=randwrite 00:27:44.602 time_based=1 00:27:44.602 runtime=10 00:27:44.602 ioengine=libaio 00:27:44.602 direct=1 00:27:44.602 bs=262144 00:27:44.602 iodepth=64 00:27:44.602 norandommap=1 00:27:44.602 numjobs=1 00:27:44.602 00:27:44.602 [job0] 00:27:44.602 filename=/dev/nvme0n1 00:27:44.602 [job1] 00:27:44.602 filename=/dev/nvme10n1 00:27:44.602 [job2] 00:27:44.602 filename=/dev/nvme1n1 00:27:44.602 [job3] 00:27:44.602 filename=/dev/nvme2n1 00:27:44.602 [job4] 00:27:44.602 filename=/dev/nvme3n1 00:27:44.602 [job5] 00:27:44.602 filename=/dev/nvme4n1 00:27:44.602 [job6] 00:27:44.602 filename=/dev/nvme5n1 00:27:44.602 [job7] 00:27:44.602 filename=/dev/nvme6n1 00:27:44.602 [job8] 00:27:44.602 filename=/dev/nvme7n1 00:27:44.602 [job9] 00:27:44.602 filename=/dev/nvme8n1 00:27:44.602 [job10] 
00:27:44.602 filename=/dev/nvme9n1 00:27:44.602 Could not set queue depth (nvme0n1) 00:27:44.602 Could not set queue depth (nvme10n1) 00:27:44.602 Could not set queue depth (nvme1n1) 00:27:44.602 Could not set queue depth (nvme2n1) 00:27:44.602 Could not set queue depth (nvme3n1) 00:27:44.602 Could not set queue depth (nvme4n1) 00:27:44.602 Could not set queue depth (nvme5n1) 00:27:44.602 Could not set queue depth (nvme6n1) 00:27:44.602 Could not set queue depth (nvme7n1) 00:27:44.602 Could not set queue depth (nvme8n1) 00:27:44.602 Could not set queue depth (nvme9n1) 00:27:44.602 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.602 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.602 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.602 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.602 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.602 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.602 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.602 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.602 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.602 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.602 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:44.602 fio-3.35 00:27:44.602 Starting 11 threads 00:27:54.568 00:27:54.568 job0: (groupid=0, jobs=1): err= 0: pid=1664507: Thu Apr 25 20:22:51 2024 00:27:54.568 write: IOPS=481, BW=120MiB/s (126MB/s)(1218MiB/10118msec); 0 zone resets 00:27:54.568 slat (usec): min=19, max=52180, avg=1970.38, stdev=3857.20 00:27:54.568 clat (msec): min=3, max=241, avg=130.95, stdev=33.53 00:27:54.568 lat (msec): min=3, max=242, avg=132.92, stdev=33.90 00:27:54.568 clat percentiles (msec): 00:27:54.568 | 1.00th=[ 36], 5.00th=[ 94], 10.00th=[ 99], 20.00th=[ 103], 00:27:54.568 | 30.00th=[ 106], 40.00th=[ 120], 50.00th=[ 128], 60.00th=[ 140], 00:27:54.568 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 171], 95.00th=[ 180], 00:27:54.568 | 99.00th=[ 194], 99.50th=[ 203], 99.90th=[ 234], 99.95th=[ 234], 00:27:54.568 | 99.99th=[ 243] 00:27:54.568 bw ( KiB/s): min=88064, max=160256, per=8.49%, avg=123059.20, stdev=24938.95, samples=20 00:27:54.568 iops : min= 344, max= 626, avg=480.70, stdev=97.42, samples=20 00:27:54.568 lat (msec) : 4=0.02%, 10=0.08%, 20=0.02%, 50=1.75%, 100=12.28% 00:27:54.568 lat (msec) : 250=85.85% 00:27:54.568 cpu : usr=1.41%, sys=1.15%, ctx=1507, majf=0, minf=1 00:27:54.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:54.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.568 issued rwts: total=0,4870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.568 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:27:54.568 job1: (groupid=0, jobs=1): err= 0: pid=1664508: Thu Apr 25 20:22:51 2024 00:27:54.568 write: IOPS=561, BW=140MiB/s (147MB/s)(1421MiB/10117msec); 0 zone resets 00:27:54.568 slat (usec): min=24, max=30657, avg=1679.34, stdev=3116.15 00:27:54.568 clat (msec): min=4, max=242, avg=112.11, stdev=31.47 00:27:54.568 lat (msec): min=6, max=242, avg=113.79, stdev=31.82 00:27:54.568 clat percentiles (msec): 00:27:54.568 | 1.00th=[ 22], 5.00th=[ 74], 10.00th=[ 92], 20.00th=[ 96], 00:27:54.568 | 30.00th=[ 99], 40.00th=[ 100], 50.00th=[ 101], 60.00th=[ 105], 00:27:54.568 | 70.00th=[ 126], 80.00th=[ 130], 90.00th=[ 153], 95.00th=[ 180], 00:27:54.568 | 99.00th=[ 207], 99.50th=[ 209], 99.90th=[ 234], 99.95th=[ 234], 00:27:54.568 | 99.99th=[ 243] 00:27:54.568 bw ( KiB/s): min=92160, max=179712, per=9.93%, avg=143923.20, stdev=26670.55, samples=20 00:27:54.568 iops : min= 360, max= 702, avg=562.20, stdev=104.18, samples=20 00:27:54.568 lat (msec) : 10=0.16%, 20=0.70%, 50=2.36%, 100=41.78%, 250=55.00% 00:27:54.568 cpu : usr=1.79%, sys=1.36%, ctx=1735, majf=0, minf=1 00:27:54.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:54.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.568 issued rwts: total=0,5685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.568 job2: (groupid=0, jobs=1): err= 0: pid=1664520: Thu Apr 25 20:22:51 2024 00:27:54.568 write: IOPS=768, BW=192MiB/s (202MB/s)(1938MiB/10083msec); 0 zone resets 00:27:54.568 slat (usec): min=19, max=11317, avg=1200.77, stdev=2177.33 00:27:54.569 clat (msec): min=14, max=199, avg=82.02, stdev=20.37 00:27:54.569 lat (msec): min=14, max=199, avg=83.22, stdev=20.50 00:27:54.569 clat percentiles (msec): 00:27:54.569 | 1.00th=[ 33], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 61], 00:27:54.569 | 30.00th=[ 63], 40.00th=[ 80], 50.00th=[ 91], 60.00th=[ 94], 00:27:54.569 | 70.00th=[ 96], 80.00th=[ 97], 90.00th=[ 99], 95.00th=[ 101], 00:27:54.569 | 99.00th=[ 148], 99.50th=[ 169], 99.90th=[ 190], 99.95th=[ 197], 00:27:54.569 | 99.99th=[ 201] 00:27:54.569 bw ( KiB/s): min=161792, max=275456, per=13.58%, avg=196812.80, stdev=40090.27, samples=20 00:27:54.569 iops : min= 632, max= 1076, avg=768.80, stdev=156.60, samples=20 00:27:54.569 lat (msec) : 20=0.08%, 50=1.74%, 100=93.11%, 250=5.07% 00:27:54.569 cpu : usr=2.31%, sys=2.10%, ctx=2378, majf=0, minf=1 00:27:54.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:54.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.569 issued rwts: total=0,7751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.569 job3: (groupid=0, jobs=1): err= 0: pid=1664521: Thu Apr 25 20:22:51 2024 00:27:54.569 write: IOPS=458, BW=115MiB/s (120MB/s)(1163MiB/10144msec); 0 zone resets 00:27:54.569 slat (usec): min=22, max=40455, avg=2111.37, stdev=3839.12 00:27:54.569 clat (msec): min=9, max=296, avg=137.31, stdev=29.25 00:27:54.569 lat (msec): min=9, max=296, avg=139.43, stdev=29.47 00:27:54.569 clat percentiles (msec): 00:27:54.569 | 1.00th=[ 48], 5.00th=[ 90], 10.00th=[ 112], 20.00th=[ 120], 00:27:54.569 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 136], 60.00th=[ 148], 00:27:54.569 | 
70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 171], 95.00th=[ 182], 00:27:54.569 | 99.00th=[ 194], 99.50th=[ 236], 99.90th=[ 288], 99.95th=[ 288], 00:27:54.569 | 99.99th=[ 296] 00:27:54.569 bw ( KiB/s): min=90112, max=149504, per=8.11%, avg=117504.00, stdev=18420.39, samples=20 00:27:54.569 iops : min= 352, max= 584, avg=459.00, stdev=71.95, samples=20 00:27:54.569 lat (msec) : 10=0.02%, 20=0.24%, 50=0.88%, 100=7.44%, 250=91.04% 00:27:54.569 lat (msec) : 500=0.39% 00:27:54.569 cpu : usr=1.41%, sys=1.05%, ctx=1285, majf=0, minf=1 00:27:54.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:27:54.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.569 issued rwts: total=0,4653,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.569 job4: (groupid=0, jobs=1): err= 0: pid=1664522: Thu Apr 25 20:22:51 2024 00:27:54.569 write: IOPS=449, BW=112MiB/s (118MB/s)(1141MiB/10146msec); 0 zone resets 00:27:54.569 slat (usec): min=26, max=80254, avg=2189.11, stdev=3981.57 00:27:54.569 clat (msec): min=79, max=300, avg=140.06, stdev=26.62 00:27:54.569 lat (msec): min=84, max=300, avg=142.25, stdev=26.70 00:27:54.569 clat percentiles (msec): 00:27:54.569 | 1.00th=[ 86], 5.00th=[ 92], 10.00th=[ 114], 20.00th=[ 121], 00:27:54.569 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 140], 60.00th=[ 150], 00:27:54.569 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 171], 95.00th=[ 180], 00:27:54.569 | 99.00th=[ 203], 99.50th=[ 241], 99.90th=[ 292], 99.95th=[ 292], 00:27:54.569 | 99.99th=[ 300] 00:27:54.569 bw ( KiB/s): min=92160, max=149504, per=7.95%, avg=115174.40, stdev=17029.37, samples=20 00:27:54.569 iops : min= 360, max= 584, avg=449.90, stdev=66.52, samples=20 00:27:54.569 lat (msec) : 100=6.64%, 250=92.88%, 500=0.48% 00:27:54.569 cpu : usr=1.39%, sys=1.24%, ctx=1193, majf=0, minf=1 00:27:54.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:54.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.569 issued rwts: total=0,4562,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.569 job5: (groupid=0, jobs=1): err= 0: pid=1664523: Thu Apr 25 20:22:51 2024 00:27:54.569 write: IOPS=400, BW=100MiB/s (105MB/s)(1016MiB/10144msec); 0 zone resets 00:27:54.569 slat (usec): min=22, max=132212, avg=2396.03, stdev=5066.64 00:27:54.569 clat (msec): min=61, max=309, avg=156.97, stdev=28.34 00:27:54.569 lat (msec): min=63, max=309, avg=159.37, stdev=28.36 00:27:54.569 clat percentiles (msec): 00:27:54.569 | 1.00th=[ 85], 5.00th=[ 118], 10.00th=[ 124], 20.00th=[ 131], 00:27:54.569 | 30.00th=[ 146], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:27:54.569 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 207], 00:27:54.569 | 99.00th=[ 251], 99.50th=[ 279], 99.90th=[ 300], 99.95th=[ 309], 00:27:54.569 | 99.99th=[ 309] 00:27:54.569 bw ( KiB/s): min=79872, max=134144, per=7.07%, avg=102451.20, stdev=13989.09, samples=20 00:27:54.569 iops : min= 312, max= 524, avg=400.20, stdev=54.64, samples=20 00:27:54.569 lat (msec) : 100=1.94%, 250=96.97%, 500=1.08% 00:27:54.569 cpu : usr=1.43%, sys=0.89%, ctx=1165, majf=0, minf=1 00:27:54.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 
00:27:54.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.569 issued rwts: total=0,4065,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.569 job6: (groupid=0, jobs=1): err= 0: pid=1664524: Thu Apr 25 20:22:51 2024 00:27:54.569 write: IOPS=572, BW=143MiB/s (150MB/s)(1448MiB/10120msec); 0 zone resets 00:27:54.569 slat (usec): min=21, max=37105, avg=1650.98, stdev=3128.22 00:27:54.569 clat (msec): min=4, max=240, avg=110.11, stdev=33.19 00:27:54.569 lat (msec): min=4, max=240, avg=111.76, stdev=33.62 00:27:54.569 clat percentiles (msec): 00:27:54.569 | 1.00th=[ 22], 5.00th=[ 67], 10.00th=[ 90], 20.00th=[ 93], 00:27:54.569 | 30.00th=[ 96], 40.00th=[ 97], 50.00th=[ 99], 60.00th=[ 101], 00:27:54.569 | 70.00th=[ 122], 80.00th=[ 132], 90.00th=[ 159], 95.00th=[ 178], 00:27:54.569 | 99.00th=[ 197], 99.50th=[ 207], 99.90th=[ 232], 99.95th=[ 232], 00:27:54.569 | 99.99th=[ 241] 00:27:54.569 bw ( KiB/s): min=88064, max=180224, per=10.12%, avg=146688.00, stdev=30957.34, samples=20 00:27:54.569 iops : min= 344, max= 704, avg=573.00, stdev=120.93, samples=20 00:27:54.569 lat (msec) : 10=0.12%, 20=0.76%, 50=2.78%, 100=57.91%, 250=38.43% 00:27:54.569 cpu : usr=1.59%, sys=1.54%, ctx=1820, majf=0, minf=1 00:27:54.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:54.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.569 issued rwts: total=0,5793,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.569 job7: (groupid=0, jobs=1): err= 0: pid=1664525: Thu Apr 25 20:22:51 2024 00:27:54.569 write: IOPS=454, BW=114MiB/s (119MB/s)(1146MiB/10084msec); 0 zone resets 00:27:54.569 slat (usec): min=20, max=86310, avg=2095.09, stdev=4685.06 00:27:54.569 clat (msec): min=20, max=267, avg=138.59, stdev=40.87 00:27:54.569 lat (msec): min=20, max=267, avg=140.68, stdev=41.27 00:27:54.569 clat percentiles (msec): 00:27:54.569 | 1.00th=[ 58], 5.00th=[ 66], 10.00th=[ 89], 20.00th=[ 94], 00:27:54.569 | 30.00th=[ 120], 40.00th=[ 130], 50.00th=[ 148], 60.00th=[ 159], 00:27:54.569 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 184], 95.00th=[ 207], 00:27:54.569 | 99.00th=[ 228], 99.50th=[ 239], 99.90th=[ 259], 99.95th=[ 268], 00:27:54.569 | 99.99th=[ 268] 00:27:54.569 bw ( KiB/s): min=82944, max=202752, per=7.98%, avg=115737.60, stdev=32513.74, samples=20 00:27:54.569 iops : min= 324, max= 792, avg=452.10, stdev=127.01, samples=20 00:27:54.569 lat (msec) : 50=0.61%, 100=27.05%, 250=72.01%, 500=0.33% 00:27:54.569 cpu : usr=1.33%, sys=1.10%, ctx=1340, majf=0, minf=1 00:27:54.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:27:54.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.569 issued rwts: total=0,4584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.569 job8: (groupid=0, jobs=1): err= 0: pid=1664526: Thu Apr 25 20:22:51 2024 00:27:54.569 write: IOPS=657, BW=164MiB/s (172MB/s)(1658MiB/10084msec); 0 zone resets 00:27:54.569 slat (usec): min=23, max=47150, avg=1478.93, stdev=2614.06 00:27:54.569 clat (msec): min=14, max=181, 
avg=95.83, stdev=13.80 00:27:54.569 lat (msec): min=15, max=181, avg=97.31, stdev=13.76 00:27:54.569 clat percentiles (msec): 00:27:54.569 | 1.00th=[ 60], 5.00th=[ 68], 10.00th=[ 81], 20.00th=[ 93], 00:27:54.569 | 30.00th=[ 95], 40.00th=[ 96], 50.00th=[ 99], 60.00th=[ 100], 00:27:54.569 | 70.00th=[ 101], 80.00th=[ 103], 90.00th=[ 105], 95.00th=[ 107], 00:27:54.569 | 99.00th=[ 142], 99.50th=[ 153], 99.90th=[ 174], 99.95th=[ 176], 00:27:54.569 | 99.99th=[ 182] 00:27:54.569 bw ( KiB/s): min=149504, max=225280, per=11.60%, avg=168115.20, stdev=16061.05, samples=20 00:27:54.569 iops : min= 584, max= 880, avg=656.70, stdev=62.74, samples=20 00:27:54.569 lat (msec) : 20=0.20%, 50=0.60%, 100=65.70%, 250=33.50% 00:27:54.569 cpu : usr=2.01%, sys=1.45%, ctx=1814, majf=0, minf=1 00:27:54.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:54.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.569 issued rwts: total=0,6630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.569 job9: (groupid=0, jobs=1): err= 0: pid=1664527: Thu Apr 25 20:22:51 2024 00:27:54.569 write: IOPS=438, BW=110MiB/s (115MB/s)(1105MiB/10084msec); 0 zone resets 00:27:54.569 slat (usec): min=25, max=102466, avg=2118.79, stdev=5240.71 00:27:54.569 clat (msec): min=6, max=385, avg=143.82, stdev=46.85 00:27:54.569 lat (msec): min=7, max=386, avg=145.94, stdev=47.44 00:27:54.569 clat percentiles (msec): 00:27:54.569 | 1.00th=[ 22], 5.00th=[ 59], 10.00th=[ 89], 20.00th=[ 95], 00:27:54.569 | 30.00th=[ 136], 40.00th=[ 150], 50.00th=[ 157], 60.00th=[ 161], 00:27:54.569 | 70.00th=[ 165], 80.00th=[ 171], 90.00th=[ 186], 95.00th=[ 207], 00:27:54.569 | 99.00th=[ 264], 99.50th=[ 334], 99.90th=[ 376], 99.95th=[ 388], 00:27:54.569 | 99.99th=[ 388] 00:27:54.569 bw ( KiB/s): min=73728, max=176128, per=7.69%, avg=111513.60, stdev=29522.00, samples=20 00:27:54.569 iops : min= 288, max= 688, avg=435.60, stdev=115.32, samples=20 00:27:54.569 lat (msec) : 10=0.09%, 20=0.72%, 50=3.55%, 100=20.68%, 250=73.64% 00:27:54.569 lat (msec) : 500=1.31% 00:27:54.569 cpu : usr=1.35%, sys=1.11%, ctx=1482, majf=0, minf=1 00:27:54.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:54.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.569 issued rwts: total=0,4419,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.569 job10: (groupid=0, jobs=1): err= 0: pid=1664528: Thu Apr 25 20:22:51 2024 00:27:54.569 write: IOPS=437, BW=109MiB/s (115MB/s)(1109MiB/10146msec); 0 zone resets 00:27:54.569 slat (usec): min=16, max=55892, avg=2112.25, stdev=4084.73 00:27:54.569 clat (msec): min=20, max=303, avg=144.25, stdev=32.96 00:27:54.569 lat (msec): min=23, max=303, avg=146.36, stdev=33.27 00:27:54.569 clat percentiles (msec): 00:27:54.569 | 1.00th=[ 47], 5.00th=[ 109], 10.00th=[ 116], 20.00th=[ 122], 00:27:54.569 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 144], 60.00th=[ 155], 00:27:54.569 | 70.00th=[ 161], 80.00th=[ 171], 90.00th=[ 184], 95.00th=[ 199], 00:27:54.569 | 99.00th=[ 222], 99.50th=[ 245], 99.90th=[ 296], 99.95th=[ 296], 00:27:54.569 | 99.99th=[ 305] 00:27:54.569 bw ( KiB/s): min=83968, max=133120, per=7.72%, avg=111913.50, stdev=17689.22, 
samples=20 00:27:54.569 iops : min= 328, max= 520, avg=437.15, stdev=69.11, samples=20 00:27:54.569 lat (msec) : 50=1.10%, 100=3.07%, 250=95.33%, 500=0.50% 00:27:54.569 cpu : usr=1.37%, sys=1.18%, ctx=1441, majf=0, minf=1 00:27:54.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:54.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.569 issued rwts: total=0,4435,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.569 00:27:54.569 Run status group 0 (all jobs): 00:27:54.569 WRITE: bw=1416MiB/s (1484MB/s), 100MiB/s-192MiB/s (105MB/s-202MB/s), io=14.0GiB (15.1GB), run=10083-10146msec 00:27:54.569 00:27:54.569 Disk stats (read/write): 00:27:54.569 nvme0n1: ios=49/9560, merge=0/0, ticks=49/1208683, in_queue=1208732, util=97.63% 00:27:54.569 nvme10n1: ios=47/11195, merge=0/0, ticks=677/1210334, in_queue=1211011, util=99.98% 00:27:54.569 nvme1n1: ios=22/15306, merge=0/0, ticks=27/1213347, in_queue=1213374, util=97.81% 00:27:54.569 nvme2n1: ios=49/9138, merge=0/0, ticks=1620/1207001, in_queue=1208621, util=99.99% 00:27:54.569 nvme3n1: ios=50/8956, merge=0/0, ticks=749/1206956, in_queue=1207705, util=100.00% 00:27:54.569 nvme4n1: ios=52/7962, merge=0/0, ticks=5056/1197299, in_queue=1202355, util=100.00% 00:27:54.569 nvme5n1: ios=0/11404, merge=0/0, ticks=0/1210453, in_queue=1210453, util=98.39% 00:27:54.569 nvme6n1: ios=51/8977, merge=0/0, ticks=2971/1200600, in_queue=1203571, util=100.00% 00:27:54.569 nvme7n1: ios=0/13058, merge=0/0, ticks=0/1211448, in_queue=1211448, util=98.81% 00:27:54.569 nvme8n1: ios=46/8642, merge=0/0, ticks=3663/1188693, in_queue=1192356, util=100.00% 00:27:54.569 nvme9n1: ios=0/8708, merge=0/0, ticks=0/1210179, in_queue=1210179, util=99.10% 00:27:54.569 20:22:51 -- target/multiconnection.sh@36 -- # sync 00:27:54.569 20:22:51 -- target/multiconnection.sh@37 -- # seq 1 11 00:27:54.569 20:22:51 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:54.569 20:22:51 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:54.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:54.569 20:22:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:54.569 20:22:52 -- common/autotest_common.sh@1198 -- # local i=0 00:27:54.569 20:22:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:54.569 20:22:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:27:54.569 20:22:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:54.569 20:22:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:27:54.569 20:22:52 -- common/autotest_common.sh@1210 -- # return 0 00:27:54.569 20:22:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:54.569 20:22:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:54.569 20:22:52 -- common/autotest_common.sh@10 -- # set +x 00:27:54.569 20:22:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.569 20:22:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:54.569 20:22:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:54.828 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:54.828 20:22:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 
00:27:54.828 20:22:52 -- common/autotest_common.sh@1198 -- # local i=0 00:27:54.828 20:22:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:27:54.828 20:22:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:54.828 20:22:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:54.828 20:22:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:27:54.828 20:22:52 -- common/autotest_common.sh@1210 -- # return 0 00:27:54.828 20:22:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:54.828 20:22:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:54.828 20:22:52 -- common/autotest_common.sh@10 -- # set +x 00:27:54.828 20:22:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.828 20:22:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:54.828 20:22:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:55.398 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:55.398 20:22:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:55.398 20:22:53 -- common/autotest_common.sh@1198 -- # local i=0 00:27:55.398 20:22:53 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:55.398 20:22:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:27:55.398 20:22:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:55.398 20:22:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:27:55.398 20:22:53 -- common/autotest_common.sh@1210 -- # return 0 00:27:55.398 20:22:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:55.398 20:22:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.398 20:22:53 -- common/autotest_common.sh@10 -- # set +x 00:27:55.398 20:22:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.398 20:22:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:55.398 20:22:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:55.657 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:55.657 20:22:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:55.657 20:22:53 -- common/autotest_common.sh@1198 -- # local i=0 00:27:55.657 20:22:53 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:55.657 20:22:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:27:55.657 20:22:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:55.657 20:22:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:27:55.657 20:22:53 -- common/autotest_common.sh@1210 -- # return 0 00:27:55.657 20:22:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:55.657 20:22:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.657 20:22:53 -- common/autotest_common.sh@10 -- # set +x 00:27:55.657 20:22:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.657 20:22:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:55.658 20:22:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:55.916 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:55.916 20:22:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:55.916 20:22:53 -- common/autotest_common.sh@1198 -- # local i=0 00:27:55.916 20:22:53 -- 
common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:55.916 20:22:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:27:55.916 20:22:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:55.916 20:22:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:27:55.916 20:22:53 -- common/autotest_common.sh@1210 -- # return 0 00:27:55.916 20:22:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:55.916 20:22:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.916 20:22:53 -- common/autotest_common.sh@10 -- # set +x 00:27:55.916 20:22:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.916 20:22:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:55.916 20:22:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:56.174 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:56.174 20:22:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:56.174 20:22:53 -- common/autotest_common.sh@1198 -- # local i=0 00:27:56.174 20:22:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:27:56.174 20:22:53 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:56.174 20:22:54 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:56.174 20:22:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:27:56.174 20:22:54 -- common/autotest_common.sh@1210 -- # return 0 00:27:56.174 20:22:54 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:56.174 20:22:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:56.174 20:22:54 -- common/autotest_common.sh@10 -- # set +x 00:27:56.174 20:22:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:56.174 20:22:54 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:56.174 20:22:54 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:56.431 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:56.431 20:22:54 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:56.431 20:22:54 -- common/autotest_common.sh@1198 -- # local i=0 00:27:56.431 20:22:54 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:56.431 20:22:54 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:27:56.690 20:22:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:27:56.690 20:22:54 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:56.690 20:22:54 -- common/autotest_common.sh@1210 -- # return 0 00:27:56.690 20:22:54 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:56.690 20:22:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:56.690 20:22:54 -- common/autotest_common.sh@10 -- # set +x 00:27:56.690 20:22:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:56.690 20:22:54 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:56.690 20:22:54 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:56.690 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:56.690 20:22:54 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:56.690 20:22:54 -- common/autotest_common.sh@1198 -- # local i=0 00:27:56.690 20:22:54 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:56.690 20:22:54 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:27:56.690 20:22:54 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:56.690 20:22:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:27:56.690 20:22:54 -- common/autotest_common.sh@1210 -- # return 0 00:27:56.690 20:22:54 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:56.690 20:22:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:56.690 20:22:54 -- common/autotest_common.sh@10 -- # set +x 00:27:56.950 20:22:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:56.950 20:22:54 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:56.950 20:22:54 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:56.950 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:56.950 20:22:54 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:56.950 20:22:54 -- common/autotest_common.sh@1198 -- # local i=0 00:27:56.950 20:22:54 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:56.950 20:22:54 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:27:56.950 20:22:54 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:56.950 20:22:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:27:56.950 20:22:54 -- common/autotest_common.sh@1210 -- # return 0 00:27:56.950 20:22:54 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:56.950 20:22:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:56.950 20:22:54 -- common/autotest_common.sh@10 -- # set +x 00:27:56.950 20:22:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:56.950 20:22:54 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:56.950 20:22:54 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:57.210 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:57.210 20:22:55 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:57.210 20:22:55 -- common/autotest_common.sh@1198 -- # local i=0 00:27:57.210 20:22:55 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:57.210 20:22:55 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:27:57.210 20:22:55 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:57.210 20:22:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:27:57.210 20:22:55 -- common/autotest_common.sh@1210 -- # return 0 00:27:57.210 20:22:55 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:57.210 20:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.210 20:22:55 -- common/autotest_common.sh@10 -- # set +x 00:27:57.210 20:22:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.210 20:22:55 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:57.210 20:22:55 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:57.471 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:57.471 20:22:55 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:57.471 20:22:55 -- common/autotest_common.sh@1198 -- # local i=0 00:27:57.471 20:22:55 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:57.471 20:22:55 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:27:57.471 20:22:55 -- 
common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:57.471 20:22:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:27:57.471 20:22:55 -- common/autotest_common.sh@1210 -- # return 0 00:27:57.471 20:22:55 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:57.471 20:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.471 20:22:55 -- common/autotest_common.sh@10 -- # set +x 00:27:57.471 20:22:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.471 20:22:55 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:57.471 20:22:55 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:57.471 20:22:55 -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:57.471 20:22:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:57.471 20:22:55 -- nvmf/common.sh@116 -- # sync 00:27:57.471 20:22:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:57.471 20:22:55 -- nvmf/common.sh@119 -- # set +e 00:27:57.471 20:22:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:57.471 20:22:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:57.471 rmmod nvme_tcp 00:27:57.471 rmmod nvme_fabrics 00:27:57.471 rmmod nvme_keyring 00:27:57.731 20:22:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:57.731 20:22:55 -- nvmf/common.sh@123 -- # set -e 00:27:57.731 20:22:55 -- nvmf/common.sh@124 -- # return 0 00:27:57.731 20:22:55 -- nvmf/common.sh@477 -- # '[' -n 1654054 ']' 00:27:57.731 20:22:55 -- nvmf/common.sh@478 -- # killprocess 1654054 00:27:57.731 20:22:55 -- common/autotest_common.sh@926 -- # '[' -z 1654054 ']' 00:27:57.731 20:22:55 -- common/autotest_common.sh@930 -- # kill -0 1654054 00:27:57.731 20:22:55 -- common/autotest_common.sh@931 -- # uname 00:27:57.731 20:22:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:57.731 20:22:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1654054 00:27:57.731 20:22:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:57.731 20:22:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:57.731 20:22:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1654054' 00:27:57.731 killing process with pid 1654054 00:27:57.731 20:22:55 -- common/autotest_common.sh@945 -- # kill 1654054 00:27:57.731 20:22:55 -- common/autotest_common.sh@950 -- # wait 1654054 00:27:59.107 20:22:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:59.107 20:22:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:59.107 20:22:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:59.107 20:22:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:59.107 20:22:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:59.107 20:22:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.107 20:22:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:59.107 20:22:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.015 20:22:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:01.015 00:28:01.015 real 1m16.437s 00:28:01.015 user 5m1.948s 00:28:01.015 sys 0m17.357s 00:28:01.015 20:22:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:01.015 20:22:58 -- common/autotest_common.sh@10 -- # set +x 00:28:01.015 ************************************ 00:28:01.015 END TEST nvmf_multiconnection 00:28:01.015 ************************************ 00:28:01.015 20:22:58 -- 
nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:01.015 20:22:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:01.015 20:22:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:01.015 20:22:58 -- common/autotest_common.sh@10 -- # set +x 00:28:01.015 ************************************ 00:28:01.015 START TEST nvmf_initiator_timeout 00:28:01.015 ************************************ 00:28:01.015 20:22:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:01.015 * Looking for test storage... 00:28:01.015 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:28:01.015 20:22:58 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:28:01.015 20:22:58 -- nvmf/common.sh@7 -- # uname -s 00:28:01.015 20:22:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:01.015 20:22:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:01.015 20:22:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:01.015 20:22:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:01.015 20:22:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:01.015 20:22:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:01.015 20:22:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:01.015 20:22:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:01.015 20:22:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:01.015 20:22:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:01.015 20:22:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:28:01.015 20:22:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:28:01.015 20:22:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:01.015 20:22:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:01.015 20:22:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:01.015 20:22:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:28:01.015 20:22:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:01.015 20:22:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:01.015 20:22:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:01.016 20:22:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.016 20:22:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.016 20:22:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.016 20:22:58 -- paths/export.sh@5 -- # export PATH 00:28:01.016 20:22:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.016 20:22:58 -- nvmf/common.sh@46 -- # : 0 00:28:01.016 20:22:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:01.016 20:22:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:01.016 20:22:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:01.016 20:22:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:01.016 20:22:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:01.016 20:22:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:01.016 20:22:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:01.016 20:22:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:01.016 20:22:58 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:01.016 20:22:58 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:01.016 20:22:58 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:28:01.016 20:22:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:01.016 20:22:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:01.016 20:22:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:01.016 20:22:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:01.016 20:22:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:01.016 20:22:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.016 20:22:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.016 20:22:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.016 20:22:58 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:28:01.016 20:22:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:01.016 20:22:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:01.016 20:22:58 -- common/autotest_common.sh@10 -- # set +x 00:28:06.293 20:23:03 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:28:06.294 20:23:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:06.294 20:23:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:06.294 20:23:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:06.294 20:23:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:06.294 20:23:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:06.294 20:23:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:06.294 20:23:03 -- nvmf/common.sh@294 -- # net_devs=() 00:28:06.294 20:23:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:06.294 20:23:03 -- nvmf/common.sh@295 -- # e810=() 00:28:06.294 20:23:03 -- nvmf/common.sh@295 -- # local -ga e810 00:28:06.294 20:23:03 -- nvmf/common.sh@296 -- # x722=() 00:28:06.294 20:23:03 -- nvmf/common.sh@296 -- # local -ga x722 00:28:06.294 20:23:03 -- nvmf/common.sh@297 -- # mlx=() 00:28:06.294 20:23:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:06.294 20:23:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:06.294 20:23:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:06.294 20:23:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:06.294 20:23:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:06.294 20:23:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:06.294 20:23:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:06.294 20:23:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:06.294 20:23:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:06.294 20:23:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:06.294 20:23:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:06.294 20:23:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:06.294 20:23:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:06.294 20:23:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:06.294 20:23:03 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:28:06.294 20:23:03 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:28:06.294 20:23:03 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:28:06.294 20:23:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:06.294 20:23:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:06.294 20:23:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:28:06.294 Found 0000:27:00.0 (0x8086 - 0x159b) 00:28:06.294 20:23:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:06.294 20:23:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:06.294 20:23:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.294 20:23:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.294 20:23:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:06.294 20:23:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:06.294 20:23:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:28:06.294 Found 0000:27:00.1 (0x8086 - 0x159b) 00:28:06.294 20:23:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:06.294 20:23:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:06.294 20:23:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.294 20:23:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.294 20:23:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:06.294 20:23:03 -- nvmf/common.sh@365 
-- # (( 0 > 0 )) 00:28:06.294 20:23:03 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:28:06.294 20:23:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:06.294 20:23:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.294 20:23:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:06.294 20:23:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.294 20:23:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:28:06.294 Found net devices under 0000:27:00.0: cvl_0_0 00:28:06.294 20:23:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.294 20:23:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:06.294 20:23:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.294 20:23:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:06.294 20:23:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.294 20:23:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:28:06.294 Found net devices under 0000:27:00.1: cvl_0_1 00:28:06.294 20:23:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.294 20:23:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:06.294 20:23:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:06.294 20:23:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:06.294 20:23:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:06.294 20:23:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:06.294 20:23:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:06.294 20:23:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:06.294 20:23:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:06.294 20:23:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:06.294 20:23:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:06.294 20:23:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:06.294 20:23:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:06.294 20:23:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:06.294 20:23:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:06.294 20:23:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:06.294 20:23:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:06.294 20:23:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:06.294 20:23:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:06.294 20:23:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:06.294 20:23:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:06.294 20:23:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:06.294 20:23:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:06.294 20:23:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:06.294 20:23:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:06.294 20:23:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:06.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:06.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.756 ms 00:28:06.294 00:28:06.294 --- 10.0.0.2 ping statistics --- 00:28:06.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.294 rtt min/avg/max/mdev = 0.756/0.756/0.756/0.000 ms 00:28:06.294 20:23:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:06.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:06.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.390 ms 00:28:06.294 00:28:06.294 --- 10.0.0.1 ping statistics --- 00:28:06.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.294 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:28:06.294 20:23:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:06.294 20:23:04 -- nvmf/common.sh@410 -- # return 0 00:28:06.294 20:23:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:06.294 20:23:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:06.294 20:23:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:06.294 20:23:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:06.294 20:23:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:06.294 20:23:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:06.294 20:23:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:06.294 20:23:04 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:28:06.294 20:23:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:06.294 20:23:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:06.294 20:23:04 -- common/autotest_common.sh@10 -- # set +x 00:28:06.294 20:23:04 -- nvmf/common.sh@469 -- # nvmfpid=1671417 00:28:06.294 20:23:04 -- nvmf/common.sh@470 -- # waitforlisten 1671417 00:28:06.294 20:23:04 -- common/autotest_common.sh@819 -- # '[' -z 1671417 ']' 00:28:06.294 20:23:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.294 20:23:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:06.294 20:23:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:06.294 20:23:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.294 20:23:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:06.294 20:23:04 -- common/autotest_common.sh@10 -- # set +x 00:28:06.294 [2024-04-25 20:23:04.143016] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:06.294 [2024-04-25 20:23:04.143130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.552 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.552 [2024-04-25 20:23:04.273669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:06.552 [2024-04-25 20:23:04.379069] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:06.552 [2024-04-25 20:23:04.379239] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:06.553 [2024-04-25 20:23:04.379253] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
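The target in this run is the stock nvmf_tgt binary launched inside the freshly created cvl_0_0_ns_spdk namespace. Stripped of the xtrace wrapping, the nvmfappstart/waitforlisten steps traced above amount to something like the sketch below (a rough illustration, not the literal autotest helpers; the retry count and relative paths are assumptions):

    # Launch the target inside the test namespace, then poll its RPC socket
    # until it answers (same idea as waitforlisten in the trace).
    NS=cvl_0_0_ns_spdk
    SOCK=/var/tmp/spdk.sock
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s "$SOCK" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done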
00:28:06.553 [2024-04-25 20:23:04.379263] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:06.553 [2024-04-25 20:23:04.379418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.553 [2024-04-25 20:23:04.379522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:06.553 [2024-04-25 20:23:04.379607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.553 [2024-04-25 20:23:04.379617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:07.118 20:23:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:07.118 20:23:04 -- common/autotest_common.sh@852 -- # return 0 00:28:07.118 20:23:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:07.118 20:23:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:07.118 20:23:04 -- common/autotest_common.sh@10 -- # set +x 00:28:07.118 20:23:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.118 20:23:04 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:28:07.118 20:23:04 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:07.118 20:23:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:07.118 20:23:04 -- common/autotest_common.sh@10 -- # set +x 00:28:07.118 Malloc0 00:28:07.118 20:23:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:07.118 20:23:04 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:28:07.118 20:23:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:07.118 20:23:04 -- common/autotest_common.sh@10 -- # set +x 00:28:07.118 Delay0 00:28:07.118 20:23:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:07.118 20:23:04 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:07.118 20:23:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:07.118 20:23:04 -- common/autotest_common.sh@10 -- # set +x 00:28:07.118 [2024-04-25 20:23:04.927360] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:07.118 20:23:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:07.118 20:23:04 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:07.118 20:23:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:07.118 20:23:04 -- common/autotest_common.sh@10 -- # set +x 00:28:07.118 20:23:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:07.118 20:23:04 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.118 20:23:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:07.118 20:23:04 -- common/autotest_common.sh@10 -- # set +x 00:28:07.118 20:23:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:07.118 20:23:04 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:07.118 20:23:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:07.118 20:23:04 -- common/autotest_common.sh@10 -- # set +x 00:28:07.118 [2024-04-25 20:23:04.955552] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:07.118 20:23:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
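Pulled out of the trace above, the target-side configuration for the initiator-timeout test is six RPC calls: a malloc bdev, a delay bdev layered on top of it, the TCP transport, and a subsystem exposing Delay0 on 10.0.0.2:4420. A condensed sketch of the same calls (assuming the default /var/tmp/spdk.sock RPC socket) reads:

    rpc=./scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB backing bdev, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # delay bdev, latencies in microseconds
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Everything that follows in this test runs against exactly this one subsystem and namespace.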
00:28:07.118 20:23:04 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:09.024 20:23:06 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:28:09.024 20:23:06 -- common/autotest_common.sh@1177 -- # local i=0 00:28:09.024 20:23:06 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:28:09.024 20:23:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:28:09.024 20:23:06 -- common/autotest_common.sh@1184 -- # sleep 2 00:28:10.935 20:23:08 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:28:10.935 20:23:08 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:28:10.935 20:23:08 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:28:10.935 20:23:08 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:28:10.935 20:23:08 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:28:10.935 20:23:08 -- common/autotest_common.sh@1187 -- # return 0 00:28:10.935 20:23:08 -- target/initiator_timeout.sh@35 -- # fio_pid=1672280 00:28:10.935 20:23:08 -- target/initiator_timeout.sh@37 -- # sleep 3 00:28:10.935 20:23:08 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:28:10.935 [global] 00:28:10.935 thread=1 00:28:10.935 invalidate=1 00:28:10.935 rw=write 00:28:10.935 time_based=1 00:28:10.935 runtime=60 00:28:10.935 ioengine=libaio 00:28:10.935 direct=1 00:28:10.935 bs=4096 00:28:10.935 iodepth=1 00:28:10.935 norandommap=0 00:28:10.935 numjobs=1 00:28:10.935 00:28:10.935 verify_dump=1 00:28:10.935 verify_backlog=512 00:28:10.935 verify_state_save=0 00:28:10.935 do_verify=1 00:28:10.935 verify=crc32c-intel 00:28:10.935 [job0] 00:28:10.935 filename=/dev/nvme0n1 00:28:10.935 Could not set queue depth (nvme0n1) 00:28:10.935 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:10.935 fio-3.35 00:28:10.935 Starting 1 thread 00:28:14.282 20:23:11 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:28:14.282 20:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.282 20:23:11 -- common/autotest_common.sh@10 -- # set +x 00:28:14.282 true 00:28:14.282 20:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.282 20:23:11 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:28:14.282 20:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.282 20:23:11 -- common/autotest_common.sh@10 -- # set +x 00:28:14.282 true 00:28:14.282 20:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.282 20:23:11 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:28:14.282 20:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.282 20:23:11 -- common/autotest_common.sh@10 -- # set +x 00:28:14.282 true 00:28:14.282 20:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.282 20:23:11 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:28:14.282 20:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.282 20:23:11 -- common/autotest_common.sh@10 -- # set +x 00:28:14.282 true 
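The latency updates just traced are the heart of the test: while fio is writing, Delay0's latencies are raised to roughly 31 seconds (the values are in microseconds) so host I/O stalls long enough to exercise the initiator's timeout/retry path, and a few lines further on they are dropped back to 30 µs so the queued I/O can complete. Condensed, the sequence looks roughly like:

    # Raise the delay-bdev latencies past the host's I/O timeout, hold them
    # there briefly, then restore them (mirrors steps 40-43 and 48-51 above).
    rpc=./scripts/rpc.py
    for lat in avg_read avg_write p99_read; do
        $rpc bdev_delay_update_latency Delay0 "$lat" 31000000
    done
    $rpc bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3
    for lat in avg_read avg_write p99_read p99_write; do
        $rpc bdev_delay_update_latency Delay0 "$lat" 30
    done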
00:28:14.282 20:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.282 20:23:11 -- target/initiator_timeout.sh@45 -- # sleep 3 00:28:16.817 20:23:14 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:28:16.817 20:23:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.817 20:23:14 -- common/autotest_common.sh@10 -- # set +x 00:28:16.817 true 00:28:16.817 20:23:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.817 20:23:14 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:28:16.817 20:23:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.817 20:23:14 -- common/autotest_common.sh@10 -- # set +x 00:28:16.817 true 00:28:16.817 20:23:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.817 20:23:14 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:28:16.817 20:23:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.817 20:23:14 -- common/autotest_common.sh@10 -- # set +x 00:28:16.817 true 00:28:16.817 20:23:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.817 20:23:14 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:28:16.817 20:23:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.817 20:23:14 -- common/autotest_common.sh@10 -- # set +x 00:28:16.817 true 00:28:16.817 20:23:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.817 20:23:14 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:28:16.817 20:23:14 -- target/initiator_timeout.sh@54 -- # wait 1672280 00:29:13.058 00:29:13.058 job0: (groupid=0, jobs=1): err= 0: pid=1672504: Thu Apr 25 20:24:09 2024 00:29:13.058 read: IOPS=100, BW=402KiB/s (412kB/s)(23.6MiB/60035msec) 00:29:13.058 slat (usec): min=3, max=2798, avg=15.66, stdev=37.72 00:29:13.058 clat (usec): min=206, max=42027k, avg=9651.13, stdev=540913.78 00:29:13.058 lat (usec): min=213, max=42028k, avg=9666.79, stdev=540913.92 00:29:13.058 clat percentiles (usec): 00:29:13.058 | 1.00th=[ 231], 5.00th=[ 241], 10.00th=[ 245], 00:29:13.058 | 20.00th=[ 258], 30.00th=[ 273], 40.00th=[ 285], 00:29:13.058 | 50.00th=[ 297], 60.00th=[ 338], 70.00th=[ 375], 00:29:13.058 | 80.00th=[ 396], 90.00th=[ 437], 95.00th=[ 41157], 00:29:13.058 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:29:13.058 | 99.95th=[ 42206], 99.99th=[17112761] 00:29:13.058 write: IOPS=102, BW=409KiB/s (419kB/s)(24.0MiB/60035msec); 0 zone resets 00:29:13.058 slat (usec): min=5, max=32338, avg=22.26, stdev=412.57 00:29:13.058 clat (usec): min=149, max=732, avg=239.92, stdev=61.67 00:29:13.058 lat (usec): min=156, max=33070, avg=262.18, stdev=424.57 00:29:13.058 clat percentiles (usec): 00:29:13.058 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 196], 00:29:13.058 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 233], 00:29:13.058 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 326], 95.00th=[ 343], 00:29:13.058 | 99.00th=[ 420], 99.50th=[ 433], 99.90th=[ 469], 99.95th=[ 529], 00:29:13.058 | 99.99th=[ 734] 00:29:13.058 bw ( KiB/s): min= 160, max= 8896, per=100.00%, avg=4915.20, stdev=2449.61, samples=10 00:29:13.058 iops : min= 40, max= 2224, avg=1228.80, stdev=612.40, samples=10 00:29:13.058 lat (usec) : 250=40.74%, 500=55.88%, 750=0.46%, 1000=0.07% 00:29:13.058 lat (msec) : 2=0.01%, 4=0.01%, 50=2.83%, >=2000=0.01% 00:29:13.058 cpu : usr=0.16%, sys=0.34%, ctx=12185, majf=0, minf=1 00:29:13.058 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:13.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.058 issued rwts: total=6038,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:13.058 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:13.058 00:29:13.058 Run status group 0 (all jobs): 00:29:13.058 READ: bw=402KiB/s (412kB/s), 402KiB/s-402KiB/s (412kB/s-412kB/s), io=23.6MiB (24.7MB), run=60035-60035msec 00:29:13.058 WRITE: bw=409KiB/s (419kB/s), 409KiB/s-409KiB/s (419kB/s-419kB/s), io=24.0MiB (25.2MB), run=60035-60035msec 00:29:13.058 00:29:13.058 Disk stats (read/write): 00:29:13.058 nvme0n1: ios=6086/6144, merge=0/0, ticks=17383/1428, in_queue=18811, util=99.71% 00:29:13.058 20:24:09 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:13.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:13.058 20:24:09 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:13.058 20:24:09 -- common/autotest_common.sh@1198 -- # local i=0 00:29:13.058 20:24:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:29:13.058 20:24:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:13.058 20:24:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:29:13.058 20:24:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:13.058 20:24:09 -- common/autotest_common.sh@1210 -- # return 0 00:29:13.058 20:24:09 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:29:13.058 20:24:09 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:29:13.058 nvmf hotplug test: fio successful as expected 00:29:13.058 20:24:09 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:13.058 20:24:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.058 20:24:09 -- common/autotest_common.sh@10 -- # set +x 00:29:13.058 20:24:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.058 20:24:09 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:29:13.058 20:24:09 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:29:13.058 20:24:09 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:29:13.058 20:24:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:13.058 20:24:09 -- nvmf/common.sh@116 -- # sync 00:29:13.058 20:24:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:13.058 20:24:09 -- nvmf/common.sh@119 -- # set +e 00:29:13.058 20:24:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:13.058 20:24:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:13.058 rmmod nvme_tcp 00:29:13.058 rmmod nvme_fabrics 00:29:13.058 rmmod nvme_keyring 00:29:13.058 20:24:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:13.058 20:24:09 -- nvmf/common.sh@123 -- # set -e 00:29:13.058 20:24:09 -- nvmf/common.sh@124 -- # return 0 00:29:13.058 20:24:09 -- nvmf/common.sh@477 -- # '[' -n 1671417 ']' 00:29:13.058 20:24:09 -- nvmf/common.sh@478 -- # killprocess 1671417 00:29:13.058 20:24:09 -- common/autotest_common.sh@926 -- # '[' -z 1671417 ']' 00:29:13.058 20:24:09 -- common/autotest_common.sh@930 -- # kill -0 1671417 00:29:13.058 20:24:09 -- common/autotest_common.sh@931 -- # uname 00:29:13.058 20:24:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
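Teardown in the trace follows the usual order: disconnect the kernel initiator, wait for the namespace to disappear, remove the subsystem, unload the host-side modules, and finally stop the target process. A bare-bones sketch of those same steps (the NQN, serial and pid variable are the ones printed earlier in the log):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"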
00:29:13.058 20:24:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1671417 00:29:13.058 20:24:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:13.058 20:24:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:13.058 20:24:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1671417' 00:29:13.058 killing process with pid 1671417 00:29:13.058 20:24:09 -- common/autotest_common.sh@945 -- # kill 1671417 00:29:13.058 20:24:09 -- common/autotest_common.sh@950 -- # wait 1671417 00:29:13.058 20:24:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:13.058 20:24:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:13.058 20:24:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:13.058 20:24:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:13.058 20:24:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:13.058 20:24:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.058 20:24:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:13.058 20:24:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.995 20:24:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:13.995 00:29:13.995 real 1m13.224s 00:29:13.995 user 4m34.363s 00:29:13.995 sys 0m5.430s 00:29:13.995 20:24:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:13.995 20:24:11 -- common/autotest_common.sh@10 -- # set +x 00:29:13.995 ************************************ 00:29:13.995 END TEST nvmf_initiator_timeout 00:29:13.995 ************************************ 00:29:14.254 20:24:11 -- nvmf/nvmf.sh@69 -- # [[ phy-fallback == phy ]] 00:29:14.254 20:24:11 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:29:14.254 20:24:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:14.254 20:24:11 -- common/autotest_common.sh@10 -- # set +x 00:29:14.254 20:24:11 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:29:14.254 20:24:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:14.254 20:24:11 -- common/autotest_common.sh@10 -- # set +x 00:29:14.254 20:24:11 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:29:14.254 20:24:11 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:14.254 20:24:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:14.254 20:24:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:14.254 20:24:11 -- common/autotest_common.sh@10 -- # set +x 00:29:14.254 ************************************ 00:29:14.254 START TEST nvmf_multicontroller 00:29:14.254 ************************************ 00:29:14.254 20:24:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:14.254 * Looking for test storage... 
00:29:14.254 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:14.254 20:24:12 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.254 20:24:12 -- nvmf/common.sh@7 -- # uname -s 00:29:14.254 20:24:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.254 20:24:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.254 20:24:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.254 20:24:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.254 20:24:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.254 20:24:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.254 20:24:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.254 20:24:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.254 20:24:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.254 20:24:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.254 20:24:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:14.254 20:24:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:14.254 20:24:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.254 20:24:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.254 20:24:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:14.254 20:24:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:14.254 20:24:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.254 20:24:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.254 20:24:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.254 20:24:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.254 20:24:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.254 20:24:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.254 20:24:12 -- paths/export.sh@5 -- # export PATH 00:29:14.254 20:24:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.254 20:24:12 -- nvmf/common.sh@46 -- # : 0 00:29:14.254 20:24:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:14.254 20:24:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:14.254 20:24:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:14.254 20:24:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.254 20:24:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.254 20:24:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:14.254 20:24:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:14.254 20:24:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:14.254 20:24:12 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:14.254 20:24:12 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:14.254 20:24:12 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:14.254 20:24:12 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:14.254 20:24:12 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:14.254 20:24:12 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:14.254 20:24:12 -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:14.254 20:24:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:14.254 20:24:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.254 20:24:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:14.254 20:24:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:14.254 20:24:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:14.254 20:24:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.254 20:24:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:14.254 20:24:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.254 20:24:12 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:29:14.254 20:24:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:14.254 20:24:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:14.254 20:24:12 -- common/autotest_common.sh@10 -- # set +x 00:29:20.833 20:24:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:20.833 20:24:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:20.833 20:24:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:20.833 20:24:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 
00:29:20.833 20:24:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:20.833 20:24:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:20.833 20:24:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:20.833 20:24:17 -- nvmf/common.sh@294 -- # net_devs=() 00:29:20.833 20:24:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:20.833 20:24:17 -- nvmf/common.sh@295 -- # e810=() 00:29:20.833 20:24:17 -- nvmf/common.sh@295 -- # local -ga e810 00:29:20.833 20:24:17 -- nvmf/common.sh@296 -- # x722=() 00:29:20.833 20:24:17 -- nvmf/common.sh@296 -- # local -ga x722 00:29:20.833 20:24:17 -- nvmf/common.sh@297 -- # mlx=() 00:29:20.833 20:24:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:20.833 20:24:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:20.833 20:24:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:20.833 20:24:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:20.833 20:24:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:20.833 20:24:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:20.833 20:24:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:20.833 20:24:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:20.833 20:24:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:20.833 20:24:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:20.833 20:24:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:20.833 20:24:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:20.833 20:24:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:20.833 20:24:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:20.833 20:24:17 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:29:20.833 20:24:17 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:29:20.833 20:24:17 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:29:20.833 20:24:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:20.833 20:24:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:20.833 20:24:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:20.833 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:20.833 20:24:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:20.833 20:24:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:20.833 20:24:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.833 20:24:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.833 20:24:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:20.833 20:24:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:20.833 20:24:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:20.833 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:20.833 20:24:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:20.833 20:24:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:20.833 20:24:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.833 20:24:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.833 20:24:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:20.833 20:24:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:20.833 20:24:17 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:29:20.833 20:24:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:20.833 20:24:17 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.833 20:24:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:20.833 20:24:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.833 20:24:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:29:20.833 Found net devices under 0000:27:00.0: cvl_0_0 00:29:20.833 20:24:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.833 20:24:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:20.834 20:24:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.834 20:24:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:20.834 20:24:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.834 20:24:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:20.834 Found net devices under 0000:27:00.1: cvl_0_1 00:29:20.834 20:24:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.834 20:24:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:20.834 20:24:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:20.834 20:24:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:20.834 20:24:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:20.834 20:24:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:20.834 20:24:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:20.834 20:24:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:20.834 20:24:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:20.834 20:24:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:20.834 20:24:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:20.834 20:24:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:20.834 20:24:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:20.834 20:24:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:20.834 20:24:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:20.834 20:24:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:20.834 20:24:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:20.834 20:24:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:20.834 20:24:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:20.834 20:24:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:20.834 20:24:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:20.834 20:24:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:20.834 20:24:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:20.834 20:24:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:20.834 20:24:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:20.834 20:24:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:20.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:20.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:29:20.834 00:29:20.834 --- 10.0.0.2 ping statistics --- 00:29:20.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.834 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:29:20.834 20:24:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:20.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:20.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:29:20.834 00:29:20.834 --- 10.0.0.1 ping statistics --- 00:29:20.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.834 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:29:20.834 20:24:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:20.834 20:24:18 -- nvmf/common.sh@410 -- # return 0 00:29:20.834 20:24:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:20.834 20:24:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:20.834 20:24:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:20.834 20:24:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:20.834 20:24:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:20.834 20:24:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:20.834 20:24:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:20.834 20:24:18 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:20.834 20:24:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:20.834 20:24:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:20.834 20:24:18 -- common/autotest_common.sh@10 -- # set +x 00:29:20.834 20:24:18 -- nvmf/common.sh@469 -- # nvmfpid=1688705 00:29:20.834 20:24:18 -- nvmf/common.sh@470 -- # waitforlisten 1688705 00:29:20.834 20:24:18 -- common/autotest_common.sh@819 -- # '[' -z 1688705 ']' 00:29:20.834 20:24:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.834 20:24:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:20.834 20:24:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:20.834 20:24:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:20.834 20:24:18 -- common/autotest_common.sh@10 -- # set +x 00:29:20.834 20:24:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:20.834 [2024-04-25 20:24:18.413498] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:20.834 [2024-04-25 20:24:18.413627] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.834 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.834 [2024-04-25 20:24:18.554729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:20.834 [2024-04-25 20:24:18.647744] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:20.834 [2024-04-25 20:24:18.647932] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.834 [2024-04-25 20:24:18.647946] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.834 [2024-04-25 20:24:18.647957] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:20.834 [2024-04-25 20:24:18.648026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:20.834 [2024-04-25 20:24:18.648133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.834 [2024-04-25 20:24:18.648143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:21.404 20:24:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:21.404 20:24:19 -- common/autotest_common.sh@852 -- # return 0 00:29:21.404 20:24:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:21.404 20:24:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:21.404 20:24:19 -- common/autotest_common.sh@10 -- # set +x 00:29:21.404 20:24:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:21.404 20:24:19 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:21.404 20:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.404 20:24:19 -- common/autotest_common.sh@10 -- # set +x 00:29:21.404 [2024-04-25 20:24:19.177832] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:21.404 20:24:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:21.404 20:24:19 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:21.404 20:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.404 20:24:19 -- common/autotest_common.sh@10 -- # set +x 00:29:21.404 Malloc0 00:29:21.404 20:24:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:21.404 20:24:19 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:21.404 20:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.404 20:24:19 -- common/autotest_common.sh@10 -- # set +x 00:29:21.404 20:24:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:21.404 20:24:19 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:21.404 20:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.404 20:24:19 -- common/autotest_common.sh@10 -- # set +x 00:29:21.404 20:24:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:21.404 20:24:19 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:21.404 20:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.404 20:24:19 -- common/autotest_common.sh@10 -- # set +x 00:29:21.404 [2024-04-25 20:24:19.259502] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:21.404 20:24:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:21.404 20:24:19 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:21.404 20:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.404 20:24:19 -- common/autotest_common.sh@10 -- # set +x 00:29:21.404 [2024-04-25 20:24:19.267374] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:21.404 20:24:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:21.404 20:24:19 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:21.404 20:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.404 20:24:19 -- common/autotest_common.sh@10 -- # set +x 00:29:21.404 Malloc1 00:29:21.404 20:24:19 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:21.404 20:24:19 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:21.404 20:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.404 20:24:19 -- common/autotest_common.sh@10 -- # set +x 00:29:21.404 20:24:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:21.404 20:24:19 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:21.404 20:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.404 20:24:19 -- common/autotest_common.sh@10 -- # set +x 00:29:21.404 20:24:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:21.404 20:24:19 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:21.404 20:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.404 20:24:19 -- common/autotest_common.sh@10 -- # set +x 00:29:21.404 20:24:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:21.404 20:24:19 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:21.404 20:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.404 20:24:19 -- common/autotest_common.sh@10 -- # set +x 00:29:21.665 20:24:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:21.665 20:24:19 -- host/multicontroller.sh@44 -- # bdevperf_pid=1688982 00:29:21.665 20:24:19 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:21.665 20:24:19 -- host/multicontroller.sh@47 -- # waitforlisten 1688982 /var/tmp/bdevperf.sock 00:29:21.665 20:24:19 -- common/autotest_common.sh@819 -- # '[' -z 1688982 ']' 00:29:21.665 20:24:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:21.665 20:24:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:21.665 20:24:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:21.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
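With both subsystems now listening on ports 4420 and 4421, the multicontroller test drives everything through bdevperf's private RPC socket: bdevperf is started idle with -z, cnode1 is attached twice under the same controller name (once per portal), and the workload is then kicked off with bdevperf.py. A sketch using the sockets and NQNs from the trace (paths relative to the SPDK repo root are an assumption):

    BPF_SOCK=/var/tmp/bdevperf.sock
    ./build/examples/bdevperf -z -r "$BPF_SOCK" -q 128 -o 4096 -w write -t 1 -f &
    rpc="./scripts/rpc.py -s $BPF_SOCK"
    # First path (port 4420) creates NVMe0n1; the 4421 listener becomes a second
    # path once it is attached under the same controller name.
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s "$BPF_SOCK" perform_tests

The -114 JSON-RPC errors that follow in the log are the expected negative cases: re-attaching NVMe0 with a different hostnqn, against a different subsystem, or with multipath disabled or set to failover is rejected before the second 4421 path is finally added.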
00:29:21.665 20:24:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:21.665 20:24:19 -- common/autotest_common.sh@10 -- # set +x 00:29:21.665 20:24:19 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:22.236 20:24:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:22.236 20:24:20 -- common/autotest_common.sh@852 -- # return 0 00:29:22.236 20:24:20 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:22.236 20:24:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.236 20:24:20 -- common/autotest_common.sh@10 -- # set +x 00:29:22.497 NVMe0n1 00:29:22.497 20:24:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.497 20:24:20 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:22.497 20:24:20 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:22.497 20:24:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.497 20:24:20 -- common/autotest_common.sh@10 -- # set +x 00:29:22.497 20:24:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.497 1 00:29:22.497 20:24:20 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:22.497 20:24:20 -- common/autotest_common.sh@640 -- # local es=0 00:29:22.497 20:24:20 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:22.497 20:24:20 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:22.497 20:24:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:22.497 20:24:20 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:22.497 20:24:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:22.497 20:24:20 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:22.497 20:24:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.497 20:24:20 -- common/autotest_common.sh@10 -- # set +x 00:29:22.497 request: 00:29:22.497 { 00:29:22.497 "name": "NVMe0", 00:29:22.497 "trtype": "tcp", 00:29:22.497 "traddr": "10.0.0.2", 00:29:22.497 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:22.497 "hostaddr": "10.0.0.2", 00:29:22.497 "hostsvcid": "60000", 00:29:22.497 "adrfam": "ipv4", 00:29:22.497 "trsvcid": "4420", 00:29:22.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:22.497 "method": "bdev_nvme_attach_controller", 00:29:22.497 "req_id": 1 00:29:22.497 } 00:29:22.497 Got JSON-RPC error response 00:29:22.497 response: 00:29:22.497 { 00:29:22.497 "code": -114, 00:29:22.497 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:22.497 } 00:29:22.497 20:24:20 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:22.497 20:24:20 -- common/autotest_common.sh@643 -- # es=1 00:29:22.497 20:24:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:22.497 
20:24:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:22.497 20:24:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:22.497 20:24:20 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:22.497 20:24:20 -- common/autotest_common.sh@640 -- # local es=0 00:29:22.497 20:24:20 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:22.497 20:24:20 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:22.497 20:24:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:22.497 20:24:20 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:22.497 20:24:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:22.497 20:24:20 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:22.497 20:24:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.497 20:24:20 -- common/autotest_common.sh@10 -- # set +x 00:29:22.497 request: 00:29:22.497 { 00:29:22.497 "name": "NVMe0", 00:29:22.497 "trtype": "tcp", 00:29:22.497 "traddr": "10.0.0.2", 00:29:22.497 "hostaddr": "10.0.0.2", 00:29:22.497 "hostsvcid": "60000", 00:29:22.497 "adrfam": "ipv4", 00:29:22.497 "trsvcid": "4420", 00:29:22.497 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:22.497 "method": "bdev_nvme_attach_controller", 00:29:22.497 "req_id": 1 00:29:22.497 } 00:29:22.497 Got JSON-RPC error response 00:29:22.497 response: 00:29:22.497 { 00:29:22.497 "code": -114, 00:29:22.497 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:22.497 } 00:29:22.497 20:24:20 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:22.497 20:24:20 -- common/autotest_common.sh@643 -- # es=1 00:29:22.497 20:24:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:22.497 20:24:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:22.497 20:24:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:22.497 20:24:20 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:22.497 20:24:20 -- common/autotest_common.sh@640 -- # local es=0 00:29:22.497 20:24:20 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:22.497 20:24:20 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:22.497 20:24:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:22.497 20:24:20 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:22.497 20:24:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:22.497 20:24:20 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:22.497 20:24:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.497 20:24:20 -- 
common/autotest_common.sh@10 -- # set +x 00:29:22.497 request: 00:29:22.497 { 00:29:22.497 "name": "NVMe0", 00:29:22.497 "trtype": "tcp", 00:29:22.497 "traddr": "10.0.0.2", 00:29:22.497 "hostaddr": "10.0.0.2", 00:29:22.497 "hostsvcid": "60000", 00:29:22.497 "adrfam": "ipv4", 00:29:22.497 "trsvcid": "4420", 00:29:22.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:22.497 "multipath": "disable", 00:29:22.497 "method": "bdev_nvme_attach_controller", 00:29:22.497 "req_id": 1 00:29:22.497 } 00:29:22.497 Got JSON-RPC error response 00:29:22.497 response: 00:29:22.497 { 00:29:22.497 "code": -114, 00:29:22.497 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:29:22.497 } 00:29:22.497 20:24:20 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:22.497 20:24:20 -- common/autotest_common.sh@643 -- # es=1 00:29:22.497 20:24:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:22.497 20:24:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:22.497 20:24:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:22.497 20:24:20 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:22.497 20:24:20 -- common/autotest_common.sh@640 -- # local es=0 00:29:22.498 20:24:20 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:22.498 20:24:20 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:22.498 20:24:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:22.498 20:24:20 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:22.498 20:24:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:22.498 20:24:20 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:22.498 20:24:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.498 20:24:20 -- common/autotest_common.sh@10 -- # set +x 00:29:22.498 request: 00:29:22.498 { 00:29:22.498 "name": "NVMe0", 00:29:22.498 "trtype": "tcp", 00:29:22.498 "traddr": "10.0.0.2", 00:29:22.498 "hostaddr": "10.0.0.2", 00:29:22.498 "hostsvcid": "60000", 00:29:22.498 "adrfam": "ipv4", 00:29:22.498 "trsvcid": "4420", 00:29:22.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:22.498 "multipath": "failover", 00:29:22.498 "method": "bdev_nvme_attach_controller", 00:29:22.498 "req_id": 1 00:29:22.498 } 00:29:22.498 Got JSON-RPC error response 00:29:22.498 response: 00:29:22.498 { 00:29:22.498 "code": -114, 00:29:22.498 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:22.498 } 00:29:22.498 20:24:20 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:22.498 20:24:20 -- common/autotest_common.sh@643 -- # es=1 00:29:22.498 20:24:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:22.498 20:24:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:22.498 20:24:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:22.498 20:24:20 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:22.498 
20:24:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.498 20:24:20 -- common/autotest_common.sh@10 -- # set +x 00:29:22.758 00:29:22.758 20:24:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.758 20:24:20 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:22.758 20:24:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.758 20:24:20 -- common/autotest_common.sh@10 -- # set +x 00:29:22.758 20:24:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.758 20:24:20 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:22.758 20:24:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.758 20:24:20 -- common/autotest_common.sh@10 -- # set +x 00:29:22.758 00:29:22.758 20:24:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.758 20:24:20 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:22.758 20:24:20 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:22.758 20:24:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.758 20:24:20 -- common/autotest_common.sh@10 -- # set +x 00:29:23.017 20:24:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.017 20:24:20 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:23.017 20:24:20 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:24.011 0 00:29:24.011 20:24:21 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:24.011 20:24:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.011 20:24:21 -- common/autotest_common.sh@10 -- # set +x 00:29:24.011 20:24:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.011 20:24:21 -- host/multicontroller.sh@100 -- # killprocess 1688982 00:29:24.011 20:24:21 -- common/autotest_common.sh@926 -- # '[' -z 1688982 ']' 00:29:24.011 20:24:21 -- common/autotest_common.sh@930 -- # kill -0 1688982 00:29:24.011 20:24:21 -- common/autotest_common.sh@931 -- # uname 00:29:24.011 20:24:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:24.011 20:24:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1688982 00:29:24.011 20:24:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:24.011 20:24:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:24.011 20:24:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1688982' 00:29:24.011 killing process with pid 1688982 00:29:24.011 20:24:21 -- common/autotest_common.sh@945 -- # kill 1688982 00:29:24.011 20:24:21 -- common/autotest_common.sh@950 -- # wait 1688982 00:29:24.586 20:24:22 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:24.586 20:24:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.586 20:24:22 -- common/autotest_common.sh@10 -- # set +x 00:29:24.586 20:24:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.586 20:24:22 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:24.586 20:24:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.586 20:24:22 -- 
common/autotest_common.sh@10 -- # set +x 00:29:24.586 20:24:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.586 20:24:22 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:29:24.586 20:24:22 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:24.586 20:24:22 -- common/autotest_common.sh@1597 -- # read -r file 00:29:24.586 20:24:22 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:24.586 20:24:22 -- common/autotest_common.sh@1596 -- # sort -u 00:29:24.586 20:24:22 -- common/autotest_common.sh@1598 -- # cat 00:29:24.586 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:24.586 [2024-04-25 20:24:19.411682] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:24.586 [2024-04-25 20:24:19.411796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688982 ] 00:29:24.586 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.586 [2024-04-25 20:24:19.527206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.586 [2024-04-25 20:24:19.616621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.586 [2024-04-25 20:24:20.671349] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name bb0ecfae-22ab-4ed1-a814-e2429087275f already exists 00:29:24.586 [2024-04-25 20:24:20.671391] bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:bb0ecfae-22ab-4ed1-a814-e2429087275f alias for bdev NVMe1n1 00:29:24.586 [2024-04-25 20:24:20.671409] bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:24.586 Running I/O for 1 seconds... 
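[annotation] The RPC exchanges above are the core of the multicontroller test: re-attaching a controller under an existing name is rejected with code -114 when multipath is "disable", and also when "failover" is requested for a path that already exists, while attaching the same name to the second listener port (4421) is accepted and simply adds another path. A minimal sketch of that sequence, using SPDK's scripts/rpc.py from the spdk checkout instead of the harness's rpc_cmd wrapper (socket path, addresses and flags copied from the log above; assumes a bdevperf instance is already serving /var/tmp/bdevperf.sock and the target exposes cnode1 on ports 4420 and 4421):

  # first path to the subsystem; creates controller NVMe0
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -i 10.0.0.2 -c 60000 -x failover
  # same name, same path, multipath disabled -> expected to fail with -114
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -i 10.0.0.2 -c 60000 -x disable || echo "rejected as expected"
  # same name, second listener port -> accepted as an additional failover path
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers

The log then drives I/O over the resulting bdevs with examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests, which is where the one-second NVMe0n1 run below comes from.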
00:29:24.586 00:29:24.586 Latency(us) 00:29:24.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.586 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:24.586 NVMe0n1 : 1.00 25222.87 98.53 0.00 0.00 5059.27 1612.53 6760.56 00:29:24.586 =================================================================================================================== 00:29:24.586 Total : 25222.87 98.53 0.00 0.00 5059.27 1612.53 6760.56 00:29:24.586 Received shutdown signal, test time was about 1.000000 seconds 00:29:24.586 00:29:24.586 Latency(us) 00:29:24.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.586 =================================================================================================================== 00:29:24.586 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.586 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:24.586 20:24:22 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:24.586 20:24:22 -- common/autotest_common.sh@1597 -- # read -r file 00:29:24.586 20:24:22 -- host/multicontroller.sh@108 -- # nvmftestfini 00:29:24.586 20:24:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:24.586 20:24:22 -- nvmf/common.sh@116 -- # sync 00:29:24.586 20:24:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:24.586 20:24:22 -- nvmf/common.sh@119 -- # set +e 00:29:24.586 20:24:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:24.586 20:24:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:24.586 rmmod nvme_tcp 00:29:24.586 rmmod nvme_fabrics 00:29:24.586 rmmod nvme_keyring 00:29:24.586 20:24:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:24.586 20:24:22 -- nvmf/common.sh@123 -- # set -e 00:29:24.586 20:24:22 -- nvmf/common.sh@124 -- # return 0 00:29:24.586 20:24:22 -- nvmf/common.sh@477 -- # '[' -n 1688705 ']' 00:29:24.586 20:24:22 -- nvmf/common.sh@478 -- # killprocess 1688705 00:29:24.586 20:24:22 -- common/autotest_common.sh@926 -- # '[' -z 1688705 ']' 00:29:24.586 20:24:22 -- common/autotest_common.sh@930 -- # kill -0 1688705 00:29:24.586 20:24:22 -- common/autotest_common.sh@931 -- # uname 00:29:24.586 20:24:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:24.586 20:24:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1688705 00:29:24.586 20:24:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:24.586 20:24:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:24.586 20:24:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1688705' 00:29:24.586 killing process with pid 1688705 00:29:24.586 20:24:22 -- common/autotest_common.sh@945 -- # kill 1688705 00:29:24.586 20:24:22 -- common/autotest_common.sh@950 -- # wait 1688705 00:29:25.157 20:24:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:25.157 20:24:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:25.157 20:24:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:25.157 20:24:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:25.157 20:24:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:25.157 20:24:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.157 20:24:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:25.157 20:24:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.693 20:24:25 -- nvmf/common.sh@278 
-- # ip -4 addr flush cvl_0_1 00:29:27.693 00:29:27.693 real 0m13.032s 00:29:27.693 user 0m17.229s 00:29:27.693 sys 0m5.589s 00:29:27.693 20:24:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:27.693 20:24:25 -- common/autotest_common.sh@10 -- # set +x 00:29:27.693 ************************************ 00:29:27.693 END TEST nvmf_multicontroller 00:29:27.693 ************************************ 00:29:27.693 20:24:25 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:27.693 20:24:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:27.693 20:24:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:27.693 20:24:25 -- common/autotest_common.sh@10 -- # set +x 00:29:27.693 ************************************ 00:29:27.693 START TEST nvmf_aer 00:29:27.693 ************************************ 00:29:27.693 20:24:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:27.693 * Looking for test storage... 00:29:27.693 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:27.693 20:24:25 -- host/aer.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.693 20:24:25 -- nvmf/common.sh@7 -- # uname -s 00:29:27.693 20:24:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.693 20:24:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.693 20:24:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.693 20:24:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.693 20:24:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.693 20:24:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.693 20:24:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.693 20:24:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.693 20:24:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.693 20:24:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.693 20:24:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:27.693 20:24:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:27.693 20:24:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.693 20:24:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.693 20:24:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:27.693 20:24:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:27.693 20:24:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.693 20:24:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.693 20:24:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.693 20:24:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.693 20:24:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.693 20:24:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.693 20:24:25 -- paths/export.sh@5 -- # export PATH 00:29:27.693 20:24:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.693 20:24:25 -- nvmf/common.sh@46 -- # : 0 00:29:27.693 20:24:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:27.693 20:24:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:27.693 20:24:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:27.693 20:24:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.693 20:24:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.693 20:24:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:27.693 20:24:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:27.693 20:24:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:27.693 20:24:25 -- host/aer.sh@11 -- # nvmftestinit 00:29:27.693 20:24:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:27.694 20:24:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.694 20:24:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:27.694 20:24:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:27.694 20:24:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:27.694 20:24:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.694 20:24:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:27.694 20:24:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.694 20:24:25 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:29:27.694 20:24:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:27.694 20:24:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:27.694 20:24:25 -- common/autotest_common.sh@10 -- # set +x 00:29:34.268 20:24:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:34.268 20:24:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:34.268 20:24:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:34.268 
20:24:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:34.268 20:24:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:34.268 20:24:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:34.268 20:24:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:34.268 20:24:31 -- nvmf/common.sh@294 -- # net_devs=() 00:29:34.268 20:24:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:34.268 20:24:31 -- nvmf/common.sh@295 -- # e810=() 00:29:34.268 20:24:31 -- nvmf/common.sh@295 -- # local -ga e810 00:29:34.268 20:24:31 -- nvmf/common.sh@296 -- # x722=() 00:29:34.268 20:24:31 -- nvmf/common.sh@296 -- # local -ga x722 00:29:34.268 20:24:31 -- nvmf/common.sh@297 -- # mlx=() 00:29:34.268 20:24:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:34.268 20:24:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.268 20:24:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.268 20:24:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.268 20:24:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.268 20:24:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.268 20:24:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.268 20:24:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.268 20:24:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.268 20:24:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.268 20:24:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.268 20:24:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.268 20:24:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:34.268 20:24:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:34.268 20:24:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:34.268 20:24:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:34.268 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:34.268 20:24:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:34.268 20:24:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:34.268 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:34.268 20:24:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:34.268 20:24:31 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:34.268 
20:24:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.268 20:24:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:34.268 20:24:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.268 20:24:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:29:34.268 Found net devices under 0000:27:00.0: cvl_0_0 00:29:34.268 20:24:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.268 20:24:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:34.268 20:24:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.268 20:24:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:34.268 20:24:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.268 20:24:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:34.268 Found net devices under 0000:27:00.1: cvl_0_1 00:29:34.268 20:24:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.268 20:24:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:34.268 20:24:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:34.268 20:24:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:34.268 20:24:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.268 20:24:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.268 20:24:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.268 20:24:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:34.268 20:24:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.268 20:24:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.268 20:24:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:34.268 20:24:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.268 20:24:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.268 20:24:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:34.268 20:24:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:34.268 20:24:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:34.268 20:24:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.268 20:24:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.268 20:24:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.268 20:24:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:34.268 20:24:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.268 20:24:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.268 20:24:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.268 20:24:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:34.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:34.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:29:34.268 00:29:34.268 --- 10.0.0.2 ping statistics --- 00:29:34.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.268 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:29:34.268 20:24:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:34.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:29:34.268 00:29:34.268 --- 10.0.0.1 ping statistics --- 00:29:34.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.268 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:29:34.268 20:24:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.268 20:24:31 -- nvmf/common.sh@410 -- # return 0 00:29:34.268 20:24:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:34.268 20:24:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.268 20:24:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:34.268 20:24:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.268 20:24:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:34.268 20:24:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:34.268 20:24:31 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:34.268 20:24:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:34.268 20:24:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:34.268 20:24:31 -- common/autotest_common.sh@10 -- # set +x 00:29:34.268 20:24:31 -- nvmf/common.sh@469 -- # nvmfpid=1693703 00:29:34.268 20:24:31 -- nvmf/common.sh@470 -- # waitforlisten 1693703 00:29:34.268 20:24:31 -- common/autotest_common.sh@819 -- # '[' -z 1693703 ']' 00:29:34.268 20:24:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.268 20:24:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:34.268 20:24:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.268 20:24:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:34.268 20:24:31 -- common/autotest_common.sh@10 -- # set +x 00:29:34.268 20:24:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:34.268 [2024-04-25 20:24:31.617731] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:34.268 [2024-04-25 20:24:31.617861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.268 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.268 [2024-04-25 20:24:31.762852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:34.268 [2024-04-25 20:24:31.857072] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:34.268 [2024-04-25 20:24:31.857273] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.268 [2024-04-25 20:24:31.857290] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.268 [2024-04-25 20:24:31.857300] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
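[annotation] The ping exchange above is nvmftestinit verifying the loopback topology it builds for these TCP tests: the first ice port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A rough sketch of the equivalent manual setup, with interface names, addresses and the iptables rule taken from the commands visible in the log:

  ip netns add cvl_0_0_ns_spdk                         # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The target application itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt ...), which is why the startup notices above and every NVMF_TARGET_NS_CMD-prefixed command carry the netns prefix.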
00:29:34.268 [2024-04-25 20:24:31.857391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.268 [2024-04-25 20:24:31.857488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.268 [2024-04-25 20:24:31.857603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.268 [2024-04-25 20:24:31.857612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.529 20:24:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:34.529 20:24:32 -- common/autotest_common.sh@852 -- # return 0 00:29:34.529 20:24:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:34.529 20:24:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:34.529 20:24:32 -- common/autotest_common.sh@10 -- # set +x 00:29:34.529 20:24:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.529 20:24:32 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.529 20:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:34.529 20:24:32 -- common/autotest_common.sh@10 -- # set +x 00:29:34.529 [2024-04-25 20:24:32.373670] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.529 20:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:34.529 20:24:32 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:34.529 20:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:34.529 20:24:32 -- common/autotest_common.sh@10 -- # set +x 00:29:34.529 Malloc0 00:29:34.529 20:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:34.529 20:24:32 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:34.529 20:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:34.529 20:24:32 -- common/autotest_common.sh@10 -- # set +x 00:29:34.529 20:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:34.529 20:24:32 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.529 20:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:34.529 20:24:32 -- common/autotest_common.sh@10 -- # set +x 00:29:34.529 20:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:34.529 20:24:32 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.529 20:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:34.529 20:24:32 -- common/autotest_common.sh@10 -- # set +x 00:29:34.529 [2024-04-25 20:24:32.442311] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.529 20:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:34.529 20:24:32 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:34.529 20:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:34.529 20:24:32 -- common/autotest_common.sh@10 -- # set +x 00:29:34.529 [2024-04-25 20:24:32.450023] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:34.529 [ 00:29:34.529 { 00:29:34.529 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:34.529 "subtype": "Discovery", 00:29:34.529 "listen_addresses": [], 00:29:34.529 "allow_any_host": true, 00:29:34.529 "hosts": [] 00:29:34.529 }, 00:29:34.529 { 00:29:34.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:29:34.529 "subtype": "NVMe", 00:29:34.529 "listen_addresses": [ 00:29:34.529 { 00:29:34.529 "transport": "TCP", 00:29:34.529 "trtype": "TCP", 00:29:34.529 "adrfam": "IPv4", 00:29:34.529 "traddr": "10.0.0.2", 00:29:34.529 "trsvcid": "4420" 00:29:34.529 } 00:29:34.529 ], 00:29:34.529 "allow_any_host": true, 00:29:34.529 "hosts": [], 00:29:34.529 "serial_number": "SPDK00000000000001", 00:29:34.529 "model_number": "SPDK bdev Controller", 00:29:34.529 "max_namespaces": 2, 00:29:34.529 "min_cntlid": 1, 00:29:34.529 "max_cntlid": 65519, 00:29:34.529 "namespaces": [ 00:29:34.529 { 00:29:34.529 "nsid": 1, 00:29:34.529 "bdev_name": "Malloc0", 00:29:34.529 "name": "Malloc0", 00:29:34.529 "nguid": "1CEA9FE0EE6040478FC631B7C6FC66AD", 00:29:34.529 "uuid": "1cea9fe0-ee60-4047-8fc6-31b7c6fc66ad" 00:29:34.529 } 00:29:34.529 ] 00:29:34.529 } 00:29:34.529 ] 00:29:34.529 20:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:34.529 20:24:32 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:34.529 20:24:32 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:34.790 20:24:32 -- host/aer.sh@33 -- # aerpid=1693853 00:29:34.790 20:24:32 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:34.790 20:24:32 -- common/autotest_common.sh@1244 -- # local i=0 00:29:34.790 20:24:32 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:34.790 20:24:32 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:29:34.790 20:24:32 -- common/autotest_common.sh@1247 -- # i=1 00:29:34.790 20:24:32 -- host/aer.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:34.790 20:24:32 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:29:34.790 20:24:32 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:34.790 20:24:32 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:29:34.790 20:24:32 -- common/autotest_common.sh@1247 -- # i=2 00:29:34.791 20:24:32 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:29:34.791 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.791 20:24:32 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:34.791 20:24:32 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:29:34.791 20:24:32 -- common/autotest_common.sh@1247 -- # i=3 00:29:34.791 20:24:32 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:29:35.051 20:24:32 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:35.051 20:24:32 -- common/autotest_common.sh@1246 -- # '[' 3 -lt 200 ']' 00:29:35.051 20:24:32 -- common/autotest_common.sh@1247 -- # i=4 00:29:35.051 20:24:32 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:29:35.051 20:24:32 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:35.051 20:24:32 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:35.051 20:24:32 -- common/autotest_common.sh@1255 -- # return 0 00:29:35.051 20:24:32 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:35.051 20:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:35.051 20:24:32 -- common/autotest_common.sh@10 -- # set +x 00:29:35.051 Malloc1 00:29:35.051 20:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:35.051 20:24:32 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:35.051 20:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:35.051 20:24:32 -- common/autotest_common.sh@10 -- # set +x 00:29:35.051 20:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:35.051 20:24:32 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:35.051 20:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:35.051 20:24:32 -- common/autotest_common.sh@10 -- # set +x 00:29:35.051 [ 00:29:35.051 { 00:29:35.051 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:35.051 "subtype": "Discovery", 00:29:35.051 "listen_addresses": [], 00:29:35.051 "allow_any_host": true, 00:29:35.051 "hosts": [] 00:29:35.051 }, 00:29:35.051 { 00:29:35.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.051 "subtype": "NVMe", 00:29:35.051 "listen_addresses": [ 00:29:35.052 { 00:29:35.052 "transport": "TCP", 00:29:35.052 "trtype": "TCP", 00:29:35.052 "adrfam": "IPv4", 00:29:35.052 "traddr": "10.0.0.2", 00:29:35.052 "trsvcid": "4420" 00:29:35.052 } 00:29:35.052 ], 00:29:35.052 "allow_any_host": true, 00:29:35.052 "hosts": [], 00:29:35.052 "serial_number": "SPDK00000000000001", 00:29:35.052 "model_number": "SPDK bdev Controller", 00:29:35.052 "max_namespaces": 2, 00:29:35.052 "min_cntlid": 1, 00:29:35.052 "max_cntlid": 65519, 00:29:35.052 "namespaces": [ 00:29:35.052 { 00:29:35.052 "nsid": 1, 00:29:35.052 "bdev_name": "Malloc0", 00:29:35.052 "name": "Malloc0", 00:29:35.052 "nguid": "1CEA9FE0EE6040478FC631B7C6FC66AD", 00:29:35.052 "uuid": "1cea9fe0-ee60-4047-8fc6-31b7c6fc66ad" 00:29:35.052 }, 00:29:35.052 { 00:29:35.052 "nsid": 2, 00:29:35.052 "bdev_name": "Malloc1", 00:29:35.052 "name": "Malloc1", 00:29:35.052 "nguid": "F25A259015704F2A96D5794CE78AF20E", 00:29:35.052 "uuid": "f25a2590-1570-4f2a-96d5-794ce78af20e" 00:29:35.052 } 00:29:35.052 ] 00:29:35.052 } 00:29:35.052 ] 00:29:35.052 20:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:35.052 20:24:32 -- host/aer.sh@43 -- # wait 1693853 00:29:35.311 Asynchronous Event Request test 00:29:35.311 Attaching to 10.0.0.2 00:29:35.311 Attached to 10.0.0.2 00:29:35.311 Registering asynchronous event callbacks... 00:29:35.311 Starting namespace attribute notice tests for all controllers... 00:29:35.311 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:35.311 aer_cb - Changed Namespace 00:29:35.311 Cleaning up... 
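[annotation] The "aer_cb - Changed Namespace" line above is what the aer test is checking for: the aer example binary connects to cnode1 and waits for a Namespace Attribute Changed asynchronous event, which the harness provokes by hot-adding a second namespace while the controller is attached. A condensed sketch of the trigger side, using scripts/rpc.py in place of the harness's rpc_cmd (subsystem name and bdev parameters copied from the log above; the target already exposes cnode1 with Malloc0 as namespace 1):

  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1    # 64 MiB bdev, 4 KiB blocks
  # adding it as namespace 2 raises the AEN that the waiting host observes
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  ./scripts/rpc.py nvmf_get_subsystems      # now lists both Malloc0 and Malloc1

The waiting side is the aer example (test/nvme/aer/aer) started with "-n 2 -t /tmp/aer_touch_file"; it touches the file once its event callbacks are registered, which is what the waitforfile polling loop above is checking before the namespace is added.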
00:29:35.311 20:24:33 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:35.311 20:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:35.311 20:24:33 -- common/autotest_common.sh@10 -- # set +x 00:29:35.311 20:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:35.311 20:24:33 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:35.311 20:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:35.311 20:24:33 -- common/autotest_common.sh@10 -- # set +x 00:29:35.311 20:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:35.311 20:24:33 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:35.311 20:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:35.311 20:24:33 -- common/autotest_common.sh@10 -- # set +x 00:29:35.311 20:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:35.311 20:24:33 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:35.311 20:24:33 -- host/aer.sh@51 -- # nvmftestfini 00:29:35.311 20:24:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:35.311 20:24:33 -- nvmf/common.sh@116 -- # sync 00:29:35.311 20:24:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:35.311 20:24:33 -- nvmf/common.sh@119 -- # set +e 00:29:35.311 20:24:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:35.311 20:24:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:35.311 rmmod nvme_tcp 00:29:35.311 rmmod nvme_fabrics 00:29:35.311 rmmod nvme_keyring 00:29:35.311 20:24:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:35.311 20:24:33 -- nvmf/common.sh@123 -- # set -e 00:29:35.311 20:24:33 -- nvmf/common.sh@124 -- # return 0 00:29:35.311 20:24:33 -- nvmf/common.sh@477 -- # '[' -n 1693703 ']' 00:29:35.311 20:24:33 -- nvmf/common.sh@478 -- # killprocess 1693703 00:29:35.311 20:24:33 -- common/autotest_common.sh@926 -- # '[' -z 1693703 ']' 00:29:35.311 20:24:33 -- common/autotest_common.sh@930 -- # kill -0 1693703 00:29:35.311 20:24:33 -- common/autotest_common.sh@931 -- # uname 00:29:35.311 20:24:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:35.311 20:24:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1693703 00:29:35.572 20:24:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:35.572 20:24:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:35.572 20:24:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1693703' 00:29:35.572 killing process with pid 1693703 00:29:35.572 20:24:33 -- common/autotest_common.sh@945 -- # kill 1693703 00:29:35.572 [2024-04-25 20:24:33.266701] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:35.572 20:24:33 -- common/autotest_common.sh@950 -- # wait 1693703 00:29:35.833 20:24:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:35.833 20:24:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:35.833 20:24:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:35.833 20:24:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:35.833 20:24:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:35.833 20:24:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.833 20:24:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:35.833 20:24:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.387 20:24:35 -- 
nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:38.387 00:29:38.387 real 0m10.756s 00:29:38.387 user 0m8.836s 00:29:38.387 sys 0m5.268s 00:29:38.387 20:24:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:38.387 20:24:35 -- common/autotest_common.sh@10 -- # set +x 00:29:38.387 ************************************ 00:29:38.387 END TEST nvmf_aer 00:29:38.387 ************************************ 00:29:38.387 20:24:35 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:38.387 20:24:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:38.387 20:24:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:38.387 20:24:35 -- common/autotest_common.sh@10 -- # set +x 00:29:38.387 ************************************ 00:29:38.387 START TEST nvmf_async_init 00:29:38.387 ************************************ 00:29:38.387 20:24:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:38.387 * Looking for test storage... 00:29:38.387 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:38.387 20:24:35 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:38.387 20:24:35 -- nvmf/common.sh@7 -- # uname -s 00:29:38.387 20:24:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:38.387 20:24:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:38.387 20:24:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:38.387 20:24:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:38.387 20:24:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:38.387 20:24:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:38.387 20:24:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:38.387 20:24:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:38.387 20:24:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:38.387 20:24:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:38.387 20:24:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:38.387 20:24:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:38.387 20:24:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:38.387 20:24:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:38.387 20:24:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:38.387 20:24:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:38.387 20:24:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.387 20:24:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.387 20:24:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.387 20:24:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.387 
20:24:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.387 20:24:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.387 20:24:35 -- paths/export.sh@5 -- # export PATH 00:29:38.387 20:24:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.387 20:24:35 -- nvmf/common.sh@46 -- # : 0 00:29:38.387 20:24:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:38.387 20:24:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:38.387 20:24:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:38.387 20:24:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:38.387 20:24:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:38.387 20:24:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:38.387 20:24:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:38.387 20:24:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:38.387 20:24:35 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:38.387 20:24:35 -- host/async_init.sh@14 -- # null_block_size=512 00:29:38.387 20:24:35 -- host/async_init.sh@15 -- # null_bdev=null0 00:29:38.387 20:24:35 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:38.387 20:24:35 -- host/async_init.sh@20 -- # uuidgen 00:29:38.387 20:24:35 -- host/async_init.sh@20 -- # tr -d - 00:29:38.387 20:24:35 -- host/async_init.sh@20 -- # nguid=00704d6906e944c4a16b81d1f8ac7d53 00:29:38.387 20:24:35 -- host/async_init.sh@22 -- # nvmftestinit 00:29:38.387 20:24:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:38.387 20:24:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.387 20:24:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:38.387 20:24:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:38.387 20:24:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:38.387 20:24:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.387 20:24:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:38.387 20:24:35 -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:29:38.387 20:24:35 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:29:38.387 20:24:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:38.387 20:24:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:38.387 20:24:35 -- common/autotest_common.sh@10 -- # set +x 00:29:44.971 20:24:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:44.971 20:24:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:44.971 20:24:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:44.971 20:24:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:44.971 20:24:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:44.971 20:24:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:44.971 20:24:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:44.971 20:24:41 -- nvmf/common.sh@294 -- # net_devs=() 00:29:44.971 20:24:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:44.971 20:24:41 -- nvmf/common.sh@295 -- # e810=() 00:29:44.971 20:24:41 -- nvmf/common.sh@295 -- # local -ga e810 00:29:44.971 20:24:41 -- nvmf/common.sh@296 -- # x722=() 00:29:44.971 20:24:41 -- nvmf/common.sh@296 -- # local -ga x722 00:29:44.971 20:24:41 -- nvmf/common.sh@297 -- # mlx=() 00:29:44.971 20:24:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:44.971 20:24:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:44.971 20:24:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:44.971 20:24:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:44.971 20:24:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:44.971 20:24:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:44.971 20:24:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:44.971 20:24:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:44.971 20:24:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:44.971 20:24:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:44.971 20:24:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:44.971 20:24:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:44.971 20:24:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:44.971 20:24:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:44.971 20:24:41 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:29:44.971 20:24:41 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:29:44.971 20:24:41 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:29:44.971 20:24:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:44.971 20:24:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:44.971 20:24:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:44.971 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:44.971 20:24:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:44.971 20:24:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:44.971 20:24:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.971 20:24:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.971 20:24:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:44.971 20:24:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:44.971 20:24:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:44.971 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:44.971 20:24:41 -- nvmf/common.sh@341 -- # [[ 
ice == unknown ]] 00:29:44.971 20:24:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:44.971 20:24:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.972 20:24:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.972 20:24:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:44.972 20:24:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:44.972 20:24:41 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:29:44.972 20:24:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:44.972 20:24:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.972 20:24:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:44.972 20:24:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.972 20:24:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:29:44.972 Found net devices under 0000:27:00.0: cvl_0_0 00:29:44.972 20:24:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.972 20:24:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:44.972 20:24:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.972 20:24:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:44.972 20:24:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.972 20:24:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:44.972 Found net devices under 0000:27:00.1: cvl_0_1 00:29:44.972 20:24:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.972 20:24:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:44.972 20:24:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:44.972 20:24:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:44.972 20:24:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:44.972 20:24:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:44.972 20:24:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:44.972 20:24:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:44.972 20:24:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:44.972 20:24:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:44.972 20:24:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:44.972 20:24:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:44.972 20:24:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:44.972 20:24:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:44.972 20:24:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:44.972 20:24:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:44.972 20:24:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:44.972 20:24:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:44.972 20:24:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:44.972 20:24:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:44.972 20:24:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:44.972 20:24:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:44.972 20:24:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:44.972 20:24:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:44.972 20:24:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:44.972 20:24:42 -- 
nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:44.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:44.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:29:44.972 00:29:44.972 --- 10.0.0.2 ping statistics --- 00:29:44.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.972 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:29:44.972 20:24:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:44.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:44.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:29:44.972 00:29:44.972 --- 10.0.0.1 ping statistics --- 00:29:44.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.972 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:29:44.972 20:24:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:44.972 20:24:42 -- nvmf/common.sh@410 -- # return 0 00:29:44.972 20:24:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:44.972 20:24:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:44.972 20:24:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:44.972 20:24:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:44.972 20:24:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:44.972 20:24:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:44.972 20:24:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:44.972 20:24:42 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:44.972 20:24:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:44.972 20:24:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:44.972 20:24:42 -- common/autotest_common.sh@10 -- # set +x 00:29:44.972 20:24:42 -- nvmf/common.sh@469 -- # nvmfpid=1698050 00:29:44.972 20:24:42 -- nvmf/common.sh@470 -- # waitforlisten 1698050 00:29:44.972 20:24:42 -- common/autotest_common.sh@819 -- # '[' -z 1698050 ']' 00:29:44.972 20:24:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.972 20:24:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:44.972 20:24:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.972 20:24:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:44.972 20:24:42 -- common/autotest_common.sh@10 -- # set +x 00:29:44.972 20:24:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:44.972 [2024-04-25 20:24:42.162544] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:44.972 [2024-04-25 20:24:42.162667] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.972 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.972 [2024-04-25 20:24:42.297373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.972 [2024-04-25 20:24:42.401814] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:44.972 [2024-04-25 20:24:42.401979] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:44.972 [2024-04-25 20:24:42.401992] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:44.972 [2024-04-25 20:24:42.402001] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:44.972 [2024-04-25 20:24:42.402025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.972 20:24:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:44.972 20:24:42 -- common/autotest_common.sh@852 -- # return 0 00:29:44.972 20:24:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:44.972 20:24:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:44.972 20:24:42 -- common/autotest_common.sh@10 -- # set +x 00:29:45.233 20:24:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:45.233 20:24:42 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:45.233 20:24:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.233 20:24:42 -- common/autotest_common.sh@10 -- # set +x 00:29:45.233 [2024-04-25 20:24:42.918739] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.233 20:24:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.233 20:24:42 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:45.233 20:24:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.233 20:24:42 -- common/autotest_common.sh@10 -- # set +x 00:29:45.233 null0 00:29:45.233 20:24:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.233 20:24:42 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:45.233 20:24:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.233 20:24:42 -- common/autotest_common.sh@10 -- # set +x 00:29:45.233 20:24:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.233 20:24:42 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:45.233 20:24:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.233 20:24:42 -- common/autotest_common.sh@10 -- # set +x 00:29:45.233 20:24:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.233 20:24:42 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 00704d6906e944c4a16b81d1f8ac7d53 00:29:45.233 20:24:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.233 20:24:42 -- common/autotest_common.sh@10 -- # set +x 00:29:45.233 20:24:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.233 20:24:42 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:45.233 20:24:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.233 20:24:42 -- common/autotest_common.sh@10 -- # set +x 00:29:45.233 [2024-04-25 20:24:42.958860] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:45.234 20:24:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.234 20:24:42 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:45.234 20:24:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.234 20:24:42 -- common/autotest_common.sh@10 -- # set +x 00:29:45.491 nvme0n1 00:29:45.491 20:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.491 20:24:43 -- host/async_init.sh@41 -- # 
rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:45.491 20:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.491 20:24:43 -- common/autotest_common.sh@10 -- # set +x 00:29:45.491 [ 00:29:45.491 { 00:29:45.491 "name": "nvme0n1", 00:29:45.491 "aliases": [ 00:29:45.491 "00704d69-06e9-44c4-a16b-81d1f8ac7d53" 00:29:45.491 ], 00:29:45.491 "product_name": "NVMe disk", 00:29:45.491 "block_size": 512, 00:29:45.491 "num_blocks": 2097152, 00:29:45.491 "uuid": "00704d69-06e9-44c4-a16b-81d1f8ac7d53", 00:29:45.491 "assigned_rate_limits": { 00:29:45.491 "rw_ios_per_sec": 0, 00:29:45.491 "rw_mbytes_per_sec": 0, 00:29:45.491 "r_mbytes_per_sec": 0, 00:29:45.491 "w_mbytes_per_sec": 0 00:29:45.491 }, 00:29:45.491 "claimed": false, 00:29:45.491 "zoned": false, 00:29:45.491 "supported_io_types": { 00:29:45.491 "read": true, 00:29:45.491 "write": true, 00:29:45.491 "unmap": false, 00:29:45.491 "write_zeroes": true, 00:29:45.491 "flush": true, 00:29:45.491 "reset": true, 00:29:45.491 "compare": true, 00:29:45.491 "compare_and_write": true, 00:29:45.491 "abort": true, 00:29:45.491 "nvme_admin": true, 00:29:45.491 "nvme_io": true 00:29:45.491 }, 00:29:45.491 "driver_specific": { 00:29:45.491 "nvme": [ 00:29:45.491 { 00:29:45.491 "trid": { 00:29:45.491 "trtype": "TCP", 00:29:45.492 "adrfam": "IPv4", 00:29:45.492 "traddr": "10.0.0.2", 00:29:45.492 "trsvcid": "4420", 00:29:45.492 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:45.492 }, 00:29:45.492 "ctrlr_data": { 00:29:45.492 "cntlid": 1, 00:29:45.492 "vendor_id": "0x8086", 00:29:45.492 "model_number": "SPDK bdev Controller", 00:29:45.492 "serial_number": "00000000000000000000", 00:29:45.492 "firmware_revision": "24.01.1", 00:29:45.492 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:45.492 "oacs": { 00:29:45.492 "security": 0, 00:29:45.492 "format": 0, 00:29:45.492 "firmware": 0, 00:29:45.492 "ns_manage": 0 00:29:45.492 }, 00:29:45.492 "multi_ctrlr": true, 00:29:45.492 "ana_reporting": false 00:29:45.492 }, 00:29:45.492 "vs": { 00:29:45.492 "nvme_version": "1.3" 00:29:45.492 }, 00:29:45.492 "ns_data": { 00:29:45.492 "id": 1, 00:29:45.492 "can_share": true 00:29:45.492 } 00:29:45.492 } 00:29:45.492 ], 00:29:45.492 "mp_policy": "active_passive" 00:29:45.492 } 00:29:45.492 } 00:29:45.492 ] 00:29:45.492 20:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.492 20:24:43 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:45.492 20:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.492 20:24:43 -- common/autotest_common.sh@10 -- # set +x 00:29:45.492 [2024-04-25 20:24:43.207951] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:45.492 [2024-04-25 20:24:43.208037] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003bc0 (9): Bad file descriptor 00:29:45.492 [2024-04-25 20:24:43.339602] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
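The async_init flow above is driven entirely over JSON-RPC against that single application: it creates the TCP transport, backs the subsystem with a null bdev carrying a fixed namespace UUID, exposes it on 10.0.0.2:4420, then attaches to it as an initiator and resets the controller (which is why cntlid goes from 1 to 2 between the two bdev_get_bdevs dumps). A condensed sketch of the same sequence, assuming the standard scripts/rpc.py helper from the SPDK checkout used in this workspace:

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    # Target side: transport, backing bdev, subsystem, namespace, listener.
    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_null_create null0 1024 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 00704d6906e944c4a16b81d1f8ac7d53
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Host side: attach over TCP, inspect the resulting nvme0n1 bdev, then reset the controller.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    $rpc bdev_get_bdevs -b nvme0n1
    $rpc bdev_nvme_reset_controller nvme0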
00:29:45.492 20:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.492 20:24:43 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:45.492 20:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.492 20:24:43 -- common/autotest_common.sh@10 -- # set +x 00:29:45.492 [ 00:29:45.492 { 00:29:45.492 "name": "nvme0n1", 00:29:45.492 "aliases": [ 00:29:45.492 "00704d69-06e9-44c4-a16b-81d1f8ac7d53" 00:29:45.492 ], 00:29:45.492 "product_name": "NVMe disk", 00:29:45.492 "block_size": 512, 00:29:45.492 "num_blocks": 2097152, 00:29:45.492 "uuid": "00704d69-06e9-44c4-a16b-81d1f8ac7d53", 00:29:45.492 "assigned_rate_limits": { 00:29:45.492 "rw_ios_per_sec": 0, 00:29:45.492 "rw_mbytes_per_sec": 0, 00:29:45.492 "r_mbytes_per_sec": 0, 00:29:45.492 "w_mbytes_per_sec": 0 00:29:45.492 }, 00:29:45.492 "claimed": false, 00:29:45.492 "zoned": false, 00:29:45.492 "supported_io_types": { 00:29:45.492 "read": true, 00:29:45.492 "write": true, 00:29:45.492 "unmap": false, 00:29:45.492 "write_zeroes": true, 00:29:45.492 "flush": true, 00:29:45.492 "reset": true, 00:29:45.492 "compare": true, 00:29:45.492 "compare_and_write": true, 00:29:45.492 "abort": true, 00:29:45.492 "nvme_admin": true, 00:29:45.492 "nvme_io": true 00:29:45.492 }, 00:29:45.492 "driver_specific": { 00:29:45.492 "nvme": [ 00:29:45.492 { 00:29:45.492 "trid": { 00:29:45.492 "trtype": "TCP", 00:29:45.492 "adrfam": "IPv4", 00:29:45.492 "traddr": "10.0.0.2", 00:29:45.492 "trsvcid": "4420", 00:29:45.492 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:45.492 }, 00:29:45.492 "ctrlr_data": { 00:29:45.492 "cntlid": 2, 00:29:45.492 "vendor_id": "0x8086", 00:29:45.492 "model_number": "SPDK bdev Controller", 00:29:45.492 "serial_number": "00000000000000000000", 00:29:45.492 "firmware_revision": "24.01.1", 00:29:45.492 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:45.492 "oacs": { 00:29:45.492 "security": 0, 00:29:45.492 "format": 0, 00:29:45.492 "firmware": 0, 00:29:45.492 "ns_manage": 0 00:29:45.492 }, 00:29:45.492 "multi_ctrlr": true, 00:29:45.492 "ana_reporting": false 00:29:45.492 }, 00:29:45.492 "vs": { 00:29:45.492 "nvme_version": "1.3" 00:29:45.492 }, 00:29:45.492 "ns_data": { 00:29:45.492 "id": 1, 00:29:45.492 "can_share": true 00:29:45.492 } 00:29:45.492 } 00:29:45.492 ], 00:29:45.492 "mp_policy": "active_passive" 00:29:45.492 } 00:29:45.492 } 00:29:45.492 ] 00:29:45.492 20:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.492 20:24:43 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.492 20:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.492 20:24:43 -- common/autotest_common.sh@10 -- # set +x 00:29:45.492 20:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.492 20:24:43 -- host/async_init.sh@53 -- # mktemp 00:29:45.492 20:24:43 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.5tSMlS7Mhg 00:29:45.492 20:24:43 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:45.492 20:24:43 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.5tSMlS7Mhg 00:29:45.492 20:24:43 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:45.492 20:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.492 20:24:43 -- common/autotest_common.sh@10 -- # set +x 00:29:45.492 20:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.492 20:24:43 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:45.492 20:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.492 20:24:43 -- common/autotest_common.sh@10 -- # set +x 00:29:45.492 [2024-04-25 20:24:43.392088] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:45.492 [2024-04-25 20:24:43.392223] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:45.492 20:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.492 20:24:43 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5tSMlS7Mhg 00:29:45.492 20:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.492 20:24:43 -- common/autotest_common.sh@10 -- # set +x 00:29:45.492 20:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.492 20:24:43 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5tSMlS7Mhg 00:29:45.492 20:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.492 20:24:43 -- common/autotest_common.sh@10 -- # set +x 00:29:45.492 [2024-04-25 20:24:43.408066] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:45.749 nvme0n1 00:29:45.749 20:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.749 20:24:43 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:45.749 20:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.749 20:24:43 -- common/autotest_common.sh@10 -- # set +x 00:29:45.749 [ 00:29:45.749 { 00:29:45.749 "name": "nvme0n1", 00:29:45.749 "aliases": [ 00:29:45.749 "00704d69-06e9-44c4-a16b-81d1f8ac7d53" 00:29:45.749 ], 00:29:45.749 "product_name": "NVMe disk", 00:29:45.749 "block_size": 512, 00:29:45.749 "num_blocks": 2097152, 00:29:45.749 "uuid": "00704d69-06e9-44c4-a16b-81d1f8ac7d53", 00:29:45.749 "assigned_rate_limits": { 00:29:45.749 "rw_ios_per_sec": 0, 00:29:45.749 "rw_mbytes_per_sec": 0, 00:29:45.749 "r_mbytes_per_sec": 0, 00:29:45.749 "w_mbytes_per_sec": 0 00:29:45.749 }, 00:29:45.749 "claimed": false, 00:29:45.749 "zoned": false, 00:29:45.749 "supported_io_types": { 00:29:45.749 "read": true, 00:29:45.749 "write": true, 00:29:45.749 "unmap": false, 00:29:45.749 "write_zeroes": true, 00:29:45.749 "flush": true, 00:29:45.749 "reset": true, 00:29:45.749 "compare": true, 00:29:45.749 "compare_and_write": true, 00:29:45.749 "abort": true, 00:29:45.749 "nvme_admin": true, 00:29:45.749 "nvme_io": true 00:29:45.749 }, 00:29:45.749 "driver_specific": { 00:29:45.749 "nvme": [ 00:29:45.749 { 00:29:45.749 "trid": { 00:29:45.749 "trtype": "TCP", 00:29:45.749 "adrfam": "IPv4", 00:29:45.749 "traddr": "10.0.0.2", 00:29:45.749 "trsvcid": "4421", 00:29:45.749 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:45.749 }, 00:29:45.749 "ctrlr_data": { 00:29:45.749 "cntlid": 3, 00:29:45.749 "vendor_id": "0x8086", 00:29:45.749 "model_number": "SPDK bdev Controller", 00:29:45.749 "serial_number": "00000000000000000000", 00:29:45.749 "firmware_revision": "24.01.1", 00:29:45.749 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:45.749 "oacs": { 00:29:45.749 "security": 0, 00:29:45.749 "format": 0, 00:29:45.749 "firmware": 0, 00:29:45.749 "ns_manage": 0 00:29:45.749 }, 00:29:45.749 "multi_ctrlr": true, 00:29:45.749 "ana_reporting": false 00:29:45.749 }, 00:29:45.749 "vs": 
{ 00:29:45.749 "nvme_version": "1.3" 00:29:45.749 }, 00:29:45.749 "ns_data": { 00:29:45.749 "id": 1, 00:29:45.749 "can_share": true 00:29:45.749 } 00:29:45.749 } 00:29:45.749 ], 00:29:45.749 "mp_policy": "active_passive" 00:29:45.749 } 00:29:45.749 } 00:29:45.749 ] 00:29:45.749 20:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.749 20:24:43 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.749 20:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.749 20:24:43 -- common/autotest_common.sh@10 -- # set +x 00:29:45.749 20:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.749 20:24:43 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.5tSMlS7Mhg 00:29:45.749 20:24:43 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:45.749 20:24:43 -- host/async_init.sh@78 -- # nvmftestfini 00:29:45.749 20:24:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:45.749 20:24:43 -- nvmf/common.sh@116 -- # sync 00:29:45.749 20:24:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:45.749 20:24:43 -- nvmf/common.sh@119 -- # set +e 00:29:45.749 20:24:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:45.749 20:24:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:45.749 rmmod nvme_tcp 00:29:45.749 rmmod nvme_fabrics 00:29:45.749 rmmod nvme_keyring 00:29:45.749 20:24:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:45.749 20:24:43 -- nvmf/common.sh@123 -- # set -e 00:29:45.749 20:24:43 -- nvmf/common.sh@124 -- # return 0 00:29:45.749 20:24:43 -- nvmf/common.sh@477 -- # '[' -n 1698050 ']' 00:29:45.749 20:24:43 -- nvmf/common.sh@478 -- # killprocess 1698050 00:29:45.749 20:24:43 -- common/autotest_common.sh@926 -- # '[' -z 1698050 ']' 00:29:45.749 20:24:43 -- common/autotest_common.sh@930 -- # kill -0 1698050 00:29:45.749 20:24:43 -- common/autotest_common.sh@931 -- # uname 00:29:45.749 20:24:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:45.749 20:24:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1698050 00:29:45.749 20:24:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:45.749 20:24:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:45.749 20:24:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1698050' 00:29:45.749 killing process with pid 1698050 00:29:45.749 20:24:43 -- common/autotest_common.sh@945 -- # kill 1698050 00:29:45.750 20:24:43 -- common/autotest_common.sh@950 -- # wait 1698050 00:29:46.315 20:24:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:46.315 20:24:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:46.315 20:24:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:46.315 20:24:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:46.315 20:24:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:46.315 20:24:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.315 20:24:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:46.315 20:24:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.224 20:24:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:48.224 00:29:48.224 real 0m10.281s 00:29:48.224 user 0m3.635s 00:29:48.224 sys 0m4.991s 00:29:48.224 20:24:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:48.224 20:24:46 -- common/autotest_common.sh@10 -- # set +x 00:29:48.224 ************************************ 00:29:48.224 END TEST nvmf_async_init 00:29:48.224 
************************************ 00:29:48.486 20:24:46 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:48.486 20:24:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:48.486 20:24:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:48.486 20:24:46 -- common/autotest_common.sh@10 -- # set +x 00:29:48.486 ************************************ 00:29:48.486 START TEST dma 00:29:48.486 ************************************ 00:29:48.486 20:24:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:48.486 * Looking for test storage... 00:29:48.486 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:48.486 20:24:46 -- host/dma.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.486 20:24:46 -- nvmf/common.sh@7 -- # uname -s 00:29:48.486 20:24:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.486 20:24:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.486 20:24:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.486 20:24:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.486 20:24:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.486 20:24:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.486 20:24:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.486 20:24:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.486 20:24:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.486 20:24:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.486 20:24:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:48.486 20:24:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:48.486 20:24:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.486 20:24:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.486 20:24:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:48.486 20:24:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:48.486 20:24:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.486 20:24:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.486 20:24:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.486 20:24:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.486 20:24:46 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.486 20:24:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.486 20:24:46 -- paths/export.sh@5 -- # export PATH 00:29:48.486 20:24:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.486 20:24:46 -- nvmf/common.sh@46 -- # : 0 00:29:48.486 20:24:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:48.486 20:24:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:48.486 20:24:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:48.486 20:24:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.486 20:24:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.486 20:24:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:48.486 20:24:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:48.486 20:24:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:48.486 20:24:46 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:48.486 20:24:46 -- host/dma.sh@13 -- # exit 0 00:29:48.486 00:29:48.486 real 0m0.101s 00:29:48.486 user 0m0.032s 00:29:48.486 sys 0m0.077s 00:29:48.486 20:24:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:48.486 20:24:46 -- common/autotest_common.sh@10 -- # set +x 00:29:48.486 ************************************ 00:29:48.486 END TEST dma 00:29:48.486 ************************************ 00:29:48.486 20:24:46 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:48.486 20:24:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:48.486 20:24:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:48.486 20:24:46 -- common/autotest_common.sh@10 -- # set +x 00:29:48.486 ************************************ 00:29:48.486 START TEST nvmf_identify 00:29:48.486 ************************************ 00:29:48.486 20:24:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:48.486 * Looking for test 
storage... 00:29:48.487 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:48.487 20:24:46 -- host/identify.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.487 20:24:46 -- nvmf/common.sh@7 -- # uname -s 00:29:48.487 20:24:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.487 20:24:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.487 20:24:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.487 20:24:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.487 20:24:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.487 20:24:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.487 20:24:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.487 20:24:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.487 20:24:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.487 20:24:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.487 20:24:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:48.487 20:24:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:48.487 20:24:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.487 20:24:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.487 20:24:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:48.487 20:24:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:48.487 20:24:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.487 20:24:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.487 20:24:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.487 20:24:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.487 20:24:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.487 20:24:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.487 20:24:46 -- paths/export.sh@5 -- # export PATH 00:29:48.487 20:24:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.487 20:24:46 -- nvmf/common.sh@46 -- # : 0 00:29:48.487 20:24:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:48.487 20:24:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:48.487 20:24:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:48.487 20:24:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.487 20:24:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.487 20:24:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:48.487 20:24:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:48.487 20:24:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:48.487 20:24:46 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:48.487 20:24:46 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:48.487 20:24:46 -- host/identify.sh@14 -- # nvmftestinit 00:29:48.487 20:24:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:48.487 20:24:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.487 20:24:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:48.487 20:24:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:48.487 20:24:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:48.487 20:24:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.487 20:24:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:48.487 20:24:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.487 20:24:46 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:29:48.487 20:24:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:48.748 20:24:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:48.748 20:24:46 -- common/autotest_common.sh@10 -- # set +x 00:29:54.035 20:24:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:54.035 20:24:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:54.035 20:24:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:54.035 20:24:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:54.035 20:24:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:54.035 20:24:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:54.035 20:24:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:54.035 20:24:51 -- nvmf/common.sh@294 -- # net_devs=() 00:29:54.035 20:24:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:54.035 20:24:51 -- 
nvmf/common.sh@295 -- # e810=() 00:29:54.035 20:24:51 -- nvmf/common.sh@295 -- # local -ga e810 00:29:54.035 20:24:51 -- nvmf/common.sh@296 -- # x722=() 00:29:54.035 20:24:51 -- nvmf/common.sh@296 -- # local -ga x722 00:29:54.035 20:24:51 -- nvmf/common.sh@297 -- # mlx=() 00:29:54.035 20:24:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:54.035 20:24:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:54.035 20:24:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:54.035 20:24:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:54.035 20:24:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:54.035 20:24:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:54.035 20:24:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:54.035 20:24:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:54.035 20:24:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:54.035 20:24:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:54.035 20:24:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:54.035 20:24:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:54.035 20:24:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:54.035 20:24:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:54.035 20:24:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:54.035 20:24:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:54.035 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:54.035 20:24:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:54.035 20:24:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:54.035 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:54.035 20:24:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:54.035 20:24:51 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:54.035 20:24:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.035 20:24:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:54.035 20:24:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.035 20:24:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:29:54.035 Found net devices under 0000:27:00.0: cvl_0_0 00:29:54.035 
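The interface names cvl_0_0 and cvl_0_1 come from the device scan above: the helper matches known Intel/Mellanox device IDs (here two E810 ports, 0x8086:0x159b bound to ice) and then maps each PCI function to its kernel interface through sysfs. A small sketch of that lookup, using the same glob the script expands:

    pci=0000:27:00.0
    # Each PCI network function lists its kernel interface name(s) under .../net/.
    for netdev in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net devices under $pci: ${netdev##*/}"
    done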
20:24:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.035 20:24:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:54.035 20:24:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.035 20:24:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:54.035 20:24:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.035 20:24:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:54.035 Found net devices under 0000:27:00.1: cvl_0_1 00:29:54.035 20:24:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.035 20:24:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:54.035 20:24:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:54.035 20:24:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:54.035 20:24:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:54.035 20:24:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:54.035 20:24:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:54.035 20:24:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:54.035 20:24:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:54.035 20:24:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:54.035 20:24:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:54.035 20:24:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:54.035 20:24:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:54.035 20:24:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:54.035 20:24:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:54.035 20:24:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:54.035 20:24:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:54.035 20:24:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:54.035 20:24:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:54.035 20:24:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:54.035 20:24:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:54.035 20:24:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:54.035 20:24:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:54.035 20:24:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:54.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:54.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:29:54.035 00:29:54.035 --- 10.0.0.2 ping statistics --- 00:29:54.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.035 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:29:54.035 20:24:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:54.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:54.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:29:54.035 00:29:54.035 --- 10.0.0.1 ping statistics --- 00:29:54.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.035 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:29:54.035 20:24:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:54.035 20:24:51 -- nvmf/common.sh@410 -- # return 0 00:29:54.035 20:24:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:54.035 20:24:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:54.035 20:24:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:54.035 20:24:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:54.035 20:24:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:54.035 20:24:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:54.035 20:24:51 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:54.035 20:24:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:54.035 20:24:51 -- common/autotest_common.sh@10 -- # set +x 00:29:54.035 20:24:51 -- host/identify.sh@19 -- # nvmfpid=1702306 00:29:54.035 20:24:51 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:54.035 20:24:51 -- host/identify.sh@23 -- # waitforlisten 1702306 00:29:54.035 20:24:51 -- common/autotest_common.sh@819 -- # '[' -z 1702306 ']' 00:29:54.035 20:24:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.035 20:24:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:54.035 20:24:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.035 20:24:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:54.035 20:24:51 -- common/autotest_common.sh@10 -- # set +x 00:29:54.035 20:24:51 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:54.293 [2024-04-25 20:24:52.020553] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:54.293 [2024-04-25 20:24:52.020659] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.294 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.294 [2024-04-25 20:24:52.143035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:54.551 [2024-04-25 20:24:52.240084] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:54.551 [2024-04-25 20:24:52.240259] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.551 [2024-04-25 20:24:52.240273] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.551 [2024-04-25 20:24:52.240283] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
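The two pings above confirm connectivity between the two ports that nvmf_tcp_init wired up for this phy run: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2 for the target, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, and the NVMe/TCP port is opened in iptables. A condensed sketch of that setup, taken from the commands logged above (address-flush and cleanup steps omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Initiator side stays in the default namespace; target side lives in the new one.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Accept inbound NVMe/TCP (port 4420) traffic arriving on the initiator-facing port.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check the path in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1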
00:29:54.551 [2024-04-25 20:24:52.240362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.551 [2024-04-25 20:24:52.240416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:54.551 [2024-04-25 20:24:52.240432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.551 [2024-04-25 20:24:52.240447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:54.809 20:24:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:54.809 20:24:52 -- common/autotest_common.sh@852 -- # return 0 00:29:54.809 20:24:52 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:54.809 20:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:54.809 20:24:52 -- common/autotest_common.sh@10 -- # set +x 00:29:54.809 [2024-04-25 20:24:52.718539] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.809 20:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:54.809 20:24:52 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:54.809 20:24:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:54.809 20:24:52 -- common/autotest_common.sh@10 -- # set +x 00:29:55.069 20:24:52 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:55.069 20:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:55.069 20:24:52 -- common/autotest_common.sh@10 -- # set +x 00:29:55.069 Malloc0 00:29:55.069 20:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:55.069 20:24:52 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:55.070 20:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:55.070 20:24:52 -- common/autotest_common.sh@10 -- # set +x 00:29:55.070 20:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:55.070 20:24:52 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:55.070 20:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:55.070 20:24:52 -- common/autotest_common.sh@10 -- # set +x 00:29:55.070 20:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:55.070 20:24:52 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:55.070 20:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:55.070 20:24:52 -- common/autotest_common.sh@10 -- # set +x 00:29:55.070 [2024-04-25 20:24:52.814762] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:55.070 20:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:55.070 20:24:52 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:55.070 20:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:55.070 20:24:52 -- common/autotest_common.sh@10 -- # set +x 00:29:55.070 20:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:55.070 20:24:52 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:55.070 20:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:55.070 20:24:52 -- common/autotest_common.sh@10 -- # set +x 00:29:55.070 [2024-04-25 20:24:52.830547] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:55.070 [ 
00:29:55.070 { 00:29:55.070 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:55.070 "subtype": "Discovery", 00:29:55.070 "listen_addresses": [ 00:29:55.070 { 00:29:55.070 "transport": "TCP", 00:29:55.070 "trtype": "TCP", 00:29:55.070 "adrfam": "IPv4", 00:29:55.070 "traddr": "10.0.0.2", 00:29:55.070 "trsvcid": "4420" 00:29:55.070 } 00:29:55.070 ], 00:29:55.070 "allow_any_host": true, 00:29:55.070 "hosts": [] 00:29:55.070 }, 00:29:55.070 { 00:29:55.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:55.070 "subtype": "NVMe", 00:29:55.070 "listen_addresses": [ 00:29:55.070 { 00:29:55.070 "transport": "TCP", 00:29:55.070 "trtype": "TCP", 00:29:55.070 "adrfam": "IPv4", 00:29:55.070 "traddr": "10.0.0.2", 00:29:55.070 "trsvcid": "4420" 00:29:55.070 } 00:29:55.070 ], 00:29:55.070 "allow_any_host": true, 00:29:55.070 "hosts": [], 00:29:55.070 "serial_number": "SPDK00000000000001", 00:29:55.070 "model_number": "SPDK bdev Controller", 00:29:55.070 "max_namespaces": 32, 00:29:55.070 "min_cntlid": 1, 00:29:55.070 "max_cntlid": 65519, 00:29:55.070 "namespaces": [ 00:29:55.070 { 00:29:55.070 "nsid": 1, 00:29:55.070 "bdev_name": "Malloc0", 00:29:55.070 "name": "Malloc0", 00:29:55.070 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:55.070 "eui64": "ABCDEF0123456789", 00:29:55.070 "uuid": "8d755c75-72f2-42a4-b0aa-15856a538e5a" 00:29:55.070 } 00:29:55.070 ] 00:29:55.070 } 00:29:55.070 ] 00:29:55.070 20:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:55.070 20:24:52 -- host/identify.sh@39 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:55.070 [2024-04-25 20:24:52.875108] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:29:55.070 [2024-04-25 20:24:52.875187] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1702617 ] 00:29:55.070 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.070 [2024-04-25 20:24:52.922607] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:55.070 [2024-04-25 20:24:52.922688] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:55.070 [2024-04-25 20:24:52.922700] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:55.070 [2024-04-25 20:24:52.922716] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:55.070 [2024-04-25 20:24:52.922728] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:55.070 [2024-04-25 20:24:52.926523] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:55.070 [2024-04-25 20:24:52.926561] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x613000001fc0 0 00:29:55.070 [2024-04-25 20:24:52.934503] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:55.070 [2024-04-25 20:24:52.934520] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:55.070 [2024-04-25 20:24:52.934527] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:55.070 [2024-04-25 20:24:52.934532] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:55.070 [2024-04-25 20:24:52.934576] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.070 [2024-04-25 20:24:52.934583] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.070 [2024-04-25 20:24:52.934591] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.070 [2024-04-25 20:24:52.934612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:55.070 [2024-04-25 20:24:52.934635] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.070 [2024-04-25 20:24:52.942510] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.070 [2024-04-25 20:24:52.942523] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.070 [2024-04-25 20:24:52.942528] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.070 [2024-04-25 20:24:52.942535] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:55.070 [2024-04-25 20:24:52.942552] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:55.070 [2024-04-25 20:24:52.942564] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:55.070 [2024-04-25 20:24:52.942572] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:55.070 [2024-04-25 20:24:52.942591] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.070 [2024-04-25 20:24:52.942597] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.070 [2024-04-25 20:24:52.942603] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.070 [2024-04-25 20:24:52.942619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.070 [2024-04-25 20:24:52.942636] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.070 [2024-04-25 20:24:52.942802] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.070 [2024-04-25 20:24:52.942809] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.070 [2024-04-25 20:24:52.942819] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.070 [2024-04-25 20:24:52.942825] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:55.070 [2024-04-25 20:24:52.942834] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:55.070 [2024-04-25 20:24:52.942842] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:55.070 [2024-04-25 20:24:52.942850] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.070 [2024-04-25 20:24:52.942855] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.070 [2024-04-25 20:24:52.942862] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.070 [2024-04-25 20:24:52.942873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.070 [2024-04-25 20:24:52.942884] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.070 [2024-04-25 20:24:52.943018] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.070 [2024-04-25 20:24:52.943026] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.070 [2024-04-25 20:24:52.943033] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.070 [2024-04-25 20:24:52.943037] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:55.070 [2024-04-25 20:24:52.943044] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:55.070 [2024-04-25 20:24:52.943053] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:55.070 [2024-04-25 20:24:52.943060] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.070 [2024-04-25 20:24:52.943065] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.070 [2024-04-25 20:24:52.943070] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.070 [2024-04-25 20:24:52.943080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.070 [2024-04-25 20:24:52.943092] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.070 [2024-04-25 20:24:52.943234] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.070 [2024-04-25 20:24:52.943240] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.070 [2024-04-25 20:24:52.943244] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.070 [2024-04-25 20:24:52.943249] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:55.070 [2024-04-25 20:24:52.943255] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:55.070 [2024-04-25 20:24:52.943264] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.070 [2024-04-25 20:24:52.943269] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.070 [2024-04-25 20:24:52.943274] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.070 [2024-04-25 20:24:52.943286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.070 [2024-04-25 20:24:52.943296] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.070 [2024-04-25 20:24:52.943441] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.071 [2024-04-25 20:24:52.943448] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.071 [2024-04-25 20:24:52.943452] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.943456] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:55.071 [2024-04-25 20:24:52.943463] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:55.071 [2024-04-25 20:24:52.943470] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:55.071 [2024-04-25 20:24:52.943478] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:55.071 [2024-04-25 20:24:52.943585] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:55.071 [2024-04-25 20:24:52.943593] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:55.071 [2024-04-25 20:24:52.943603] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.943608] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.943616] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.071 [2024-04-25 20:24:52.943624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.071 [2024-04-25 20:24:52.943636] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.071 [2024-04-25 20:24:52.943771] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.071 [2024-04-25 20:24:52.943778] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.071 [2024-04-25 20:24:52.943782] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.943786] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:55.071 [2024-04-25 20:24:52.943792] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:55.071 [2024-04-25 20:24:52.943803] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.943807] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.943814] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.071 [2024-04-25 20:24:52.943823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.071 [2024-04-25 20:24:52.943833] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.071 [2024-04-25 20:24:52.943971] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.071 [2024-04-25 20:24:52.943978] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.071 [2024-04-25 20:24:52.943982] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.943987] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:55.071 [2024-04-25 20:24:52.943993] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:55.071 [2024-04-25 20:24:52.943998] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:55.071 [2024-04-25 20:24:52.944006] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:55.071 [2024-04-25 20:24:52.944014] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:55.071 [2024-04-25 20:24:52.944026] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.944031] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.944040] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.071 [2024-04-25 20:24:52.944050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.071 [2024-04-25 20:24:52.944061] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.071 [2024-04-25 20:24:52.944267] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:55.071 [2024-04-25 20:24:52.944275] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:55.071 [2024-04-25 20:24:52.944279] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.944285] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x613000001fc0): datao=0, datal=4096, cccid=0 00:29:55.071 [2024-04-25 20:24:52.944291] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:55.071 [2024-04-25 20:24:52.944366] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.944371] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.984859] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.071 [2024-04-25 20:24:52.984873] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.071 [2024-04-25 20:24:52.984881] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.984886] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:55.071 [2024-04-25 20:24:52.984901] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:55.071 [2024-04-25 20:24:52.984909] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:55.071 [2024-04-25 20:24:52.984914] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:55.071 [2024-04-25 20:24:52.984921] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:55.071 [2024-04-25 20:24:52.984931] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:55.071 [2024-04-25 20:24:52.984938] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:55.071 [2024-04-25 20:24:52.984947] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:55.071 [2024-04-25 20:24:52.984958] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.984963] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.984969] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.071 [2024-04-25 20:24:52.984980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:55.071 [2024-04-25 20:24:52.984995] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.071 [2024-04-25 20:24:52.985164] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.071 [2024-04-25 20:24:52.985170] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.071 [2024-04-25 20:24:52.985174] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.985178] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:55.071 [2024-04-25 20:24:52.985191] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.985197] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.985203] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.071 [2024-04-25 20:24:52.985215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.071 [2024-04-25 20:24:52.985222] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.985226] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.985230] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x613000001fc0) 00:29:55.071 [2024-04-25 20:24:52.985239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.071 [2024-04-25 20:24:52.985245] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.985248] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.985253] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x613000001fc0) 00:29:55.071 [2024-04-25 20:24:52.985260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.071 [2024-04-25 20:24:52.985265] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.985269] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.985273] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.071 [2024-04-25 20:24:52.985282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.071 [2024-04-25 20:24:52.985287] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:55.071 [2024-04-25 20:24:52.985296] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:55.071 [2024-04-25 20:24:52.985304] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.985309] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.985314] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:55.071 [2024-04-25 20:24:52.985323] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.071 [2024-04-25 20:24:52.985335] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.071 [2024-04-25 20:24:52.985340] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:29:55.071 [2024-04-25 20:24:52.985345] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:29:55.071 [2024-04-25 20:24:52.985350] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.071 [2024-04-25 20:24:52.985355] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:55.071 [2024-04-25 20:24:52.985531] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:29:55.071 [2024-04-25 20:24:52.985537] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.071 [2024-04-25 20:24:52.985541] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.071 [2024-04-25 20:24:52.985545] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:55.071 [2024-04-25 20:24:52.985552] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:55.072 [2024-04-25 20:24:52.985561] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:55.072 [2024-04-25 20:24:52.985576] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.072 [2024-04-25 20:24:52.985584] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.072 [2024-04-25 20:24:52.985590] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:55.072 [2024-04-25 20:24:52.985598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.072 [2024-04-25 20:24:52.985609] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:55.072 [2024-04-25 20:24:52.985761] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:55.072 [2024-04-25 20:24:52.985768] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:55.072 [2024-04-25 20:24:52.985773] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:55.072 [2024-04-25 20:24:52.985778] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=4 00:29:55.072 [2024-04-25 20:24:52.985786] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:55.072 [2024-04-25 20:24:52.985894] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:55.072 [2024-04-25 20:24:52.985899] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:55.072 [2024-04-25 20:24:52.986013] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.072 [2024-04-25 20:24:52.986020] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.072 [2024-04-25 20:24:52.986024] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.072 [2024-04-25 20:24:52.986030] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:55.072 [2024-04-25 20:24:52.986045] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:55.072 [2024-04-25 20:24:52.986078] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.072 [2024-04-25 20:24:52.986083] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.072 [2024-04-25 20:24:52.986088] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:55.072 [2024-04-25 20:24:52.986098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.072 [2024-04-25 20:24:52.986106] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.072 [2024-04-25 20:24:52.986111] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.072 [2024-04-25 20:24:52.986116] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:29:55.072 [2024-04-25 20:24:52.986124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.072 [2024-04-25 20:24:52.986136] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:55.072 [2024-04-25 20:24:52.986142] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:29:55.072 [2024-04-25 20:24:52.986365] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:55.072 [2024-04-25 20:24:52.986372] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:55.072 [2024-04-25 20:24:52.986377] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:55.072 [2024-04-25 20:24:52.986382] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=1024, cccid=4 00:29:55.072 [2024-04-25 20:24:52.986388] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=1024 00:29:55.072 [2024-04-25 20:24:52.986396] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:55.072 [2024-04-25 20:24:52.986403] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:55.072 [2024-04-25 20:24:52.986409] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.072 [2024-04-25 20:24:52.986417] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.072 [2024-04-25 20:24:52.986420] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.072 [2024-04-25 20:24:52.986425] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:29:55.335 [2024-04-25 20:24:53.030502] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.335 [2024-04-25 20:24:53.030517] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.335 [2024-04-25 20:24:53.030521] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.335 [2024-04-25 20:24:53.030527] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:55.335 [2024-04-25 20:24:53.030549] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.335 [2024-04-25 20:24:53.030554] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.335 [2024-04-25 20:24:53.030560] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:55.335 [2024-04-25 20:24:53.030571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.335 [2024-04-25 20:24:53.030588] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:55.335 [2024-04-25 20:24:53.030778] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:55.335 [2024-04-25 20:24:53.030784] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:55.335 [2024-04-25 20:24:53.030788] 
nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:55.335 [2024-04-25 20:24:53.030796] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=3072, cccid=4 00:29:55.335 [2024-04-25 20:24:53.030802] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=3072 00:29:55.335 [2024-04-25 20:24:53.030888] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:55.335 [2024-04-25 20:24:53.030893] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:55.335 [2024-04-25 20:24:53.071827] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.335 [2024-04-25 20:24:53.071840] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.335 [2024-04-25 20:24:53.071844] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.335 [2024-04-25 20:24:53.071849] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:55.335 [2024-04-25 20:24:53.071864] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.335 [2024-04-25 20:24:53.071869] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.335 [2024-04-25 20:24:53.071877] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:55.335 [2024-04-25 20:24:53.071887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.335 [2024-04-25 20:24:53.071901] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:55.335 [2024-04-25 20:24:53.072076] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:55.335 [2024-04-25 20:24:53.072082] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:55.335 [2024-04-25 20:24:53.072086] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:55.335 [2024-04-25 20:24:53.072091] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=8, cccid=4 00:29:55.335 [2024-04-25 20:24:53.072096] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=8 00:29:55.335 [2024-04-25 20:24:53.072104] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:55.335 [2024-04-25 20:24:53.072108] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:55.335 [2024-04-25 20:24:53.112831] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.335 [2024-04-25 20:24:53.112844] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.335 [2024-04-25 20:24:53.112849] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.335 [2024-04-25 20:24:53.112854] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:55.335 ===================================================== 00:29:55.335 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:55.335 ===================================================== 00:29:55.335 Controller Capabilities/Features 00:29:55.335 ================================ 00:29:55.335 Vendor ID: 0000 00:29:55.335 Subsystem Vendor ID: 0000 00:29:55.335 Serial 
Number: .................... 00:29:55.335 Model Number: ........................................ 00:29:55.335 Firmware Version: 24.01.1 00:29:55.335 Recommended Arb Burst: 0 00:29:55.335 IEEE OUI Identifier: 00 00 00 00:29:55.335 Multi-path I/O 00:29:55.335 May have multiple subsystem ports: No 00:29:55.335 May have multiple controllers: No 00:29:55.335 Associated with SR-IOV VF: No 00:29:55.335 Max Data Transfer Size: 131072 00:29:55.335 Max Number of Namespaces: 0 00:29:55.335 Max Number of I/O Queues: 1024 00:29:55.335 NVMe Specification Version (VS): 1.3 00:29:55.335 NVMe Specification Version (Identify): 1.3 00:29:55.335 Maximum Queue Entries: 128 00:29:55.335 Contiguous Queues Required: Yes 00:29:55.335 Arbitration Mechanisms Supported 00:29:55.335 Weighted Round Robin: Not Supported 00:29:55.335 Vendor Specific: Not Supported 00:29:55.335 Reset Timeout: 15000 ms 00:29:55.335 Doorbell Stride: 4 bytes 00:29:55.335 NVM Subsystem Reset: Not Supported 00:29:55.335 Command Sets Supported 00:29:55.335 NVM Command Set: Supported 00:29:55.335 Boot Partition: Not Supported 00:29:55.335 Memory Page Size Minimum: 4096 bytes 00:29:55.335 Memory Page Size Maximum: 4096 bytes 00:29:55.335 Persistent Memory Region: Not Supported 00:29:55.335 Optional Asynchronous Events Supported 00:29:55.335 Namespace Attribute Notices: Not Supported 00:29:55.335 Firmware Activation Notices: Not Supported 00:29:55.335 ANA Change Notices: Not Supported 00:29:55.335 PLE Aggregate Log Change Notices: Not Supported 00:29:55.335 LBA Status Info Alert Notices: Not Supported 00:29:55.335 EGE Aggregate Log Change Notices: Not Supported 00:29:55.335 Normal NVM Subsystem Shutdown event: Not Supported 00:29:55.335 Zone Descriptor Change Notices: Not Supported 00:29:55.335 Discovery Log Change Notices: Supported 00:29:55.335 Controller Attributes 00:29:55.335 128-bit Host Identifier: Not Supported 00:29:55.335 Non-Operational Permissive Mode: Not Supported 00:29:55.335 NVM Sets: Not Supported 00:29:55.335 Read Recovery Levels: Not Supported 00:29:55.335 Endurance Groups: Not Supported 00:29:55.335 Predictable Latency Mode: Not Supported 00:29:55.335 Traffic Based Keep ALive: Not Supported 00:29:55.335 Namespace Granularity: Not Supported 00:29:55.335 SQ Associations: Not Supported 00:29:55.335 UUID List: Not Supported 00:29:55.335 Multi-Domain Subsystem: Not Supported 00:29:55.335 Fixed Capacity Management: Not Supported 00:29:55.335 Variable Capacity Management: Not Supported 00:29:55.335 Delete Endurance Group: Not Supported 00:29:55.335 Delete NVM Set: Not Supported 00:29:55.335 Extended LBA Formats Supported: Not Supported 00:29:55.335 Flexible Data Placement Supported: Not Supported 00:29:55.335 00:29:55.335 Controller Memory Buffer Support 00:29:55.335 ================================ 00:29:55.335 Supported: No 00:29:55.335 00:29:55.335 Persistent Memory Region Support 00:29:55.335 ================================ 00:29:55.335 Supported: No 00:29:55.335 00:29:55.335 Admin Command Set Attributes 00:29:55.335 ============================ 00:29:55.335 Security Send/Receive: Not Supported 00:29:55.335 Format NVM: Not Supported 00:29:55.335 Firmware Activate/Download: Not Supported 00:29:55.335 Namespace Management: Not Supported 00:29:55.335 Device Self-Test: Not Supported 00:29:55.335 Directives: Not Supported 00:29:55.335 NVMe-MI: Not Supported 00:29:55.335 Virtualization Management: Not Supported 00:29:55.335 Doorbell Buffer Config: Not Supported 00:29:55.335 Get LBA Status Capability: Not Supported 00:29:55.335 Command 
& Feature Lockdown Capability: Not Supported 00:29:55.335 Abort Command Limit: 1 00:29:55.335 Async Event Request Limit: 4 00:29:55.335 Number of Firmware Slots: N/A 00:29:55.335 Firmware Slot 1 Read-Only: N/A 00:29:55.335 Firmware Activation Without Reset: N/A 00:29:55.335 Multiple Update Detection Support: N/A 00:29:55.335 Firmware Update Granularity: No Information Provided 00:29:55.335 Per-Namespace SMART Log: No 00:29:55.335 Asymmetric Namespace Access Log Page: Not Supported 00:29:55.335 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:55.335 Command Effects Log Page: Not Supported 00:29:55.335 Get Log Page Extended Data: Supported 00:29:55.335 Telemetry Log Pages: Not Supported 00:29:55.335 Persistent Event Log Pages: Not Supported 00:29:55.335 Supported Log Pages Log Page: May Support 00:29:55.335 Commands Supported & Effects Log Page: Not Supported 00:29:55.335 Feature Identifiers & Effects Log Page:May Support 00:29:55.335 NVMe-MI Commands & Effects Log Page: May Support 00:29:55.335 Data Area 4 for Telemetry Log: Not Supported 00:29:55.335 Error Log Page Entries Supported: 128 00:29:55.335 Keep Alive: Not Supported 00:29:55.335 00:29:55.335 NVM Command Set Attributes 00:29:55.335 ========================== 00:29:55.335 Submission Queue Entry Size 00:29:55.335 Max: 1 00:29:55.335 Min: 1 00:29:55.335 Completion Queue Entry Size 00:29:55.335 Max: 1 00:29:55.335 Min: 1 00:29:55.335 Number of Namespaces: 0 00:29:55.335 Compare Command: Not Supported 00:29:55.335 Write Uncorrectable Command: Not Supported 00:29:55.335 Dataset Management Command: Not Supported 00:29:55.335 Write Zeroes Command: Not Supported 00:29:55.336 Set Features Save Field: Not Supported 00:29:55.336 Reservations: Not Supported 00:29:55.336 Timestamp: Not Supported 00:29:55.336 Copy: Not Supported 00:29:55.336 Volatile Write Cache: Not Present 00:29:55.336 Atomic Write Unit (Normal): 1 00:29:55.336 Atomic Write Unit (PFail): 1 00:29:55.336 Atomic Compare & Write Unit: 1 00:29:55.336 Fused Compare & Write: Supported 00:29:55.336 Scatter-Gather List 00:29:55.336 SGL Command Set: Supported 00:29:55.336 SGL Keyed: Supported 00:29:55.336 SGL Bit Bucket Descriptor: Not Supported 00:29:55.336 SGL Metadata Pointer: Not Supported 00:29:55.336 Oversized SGL: Not Supported 00:29:55.336 SGL Metadata Address: Not Supported 00:29:55.336 SGL Offset: Supported 00:29:55.336 Transport SGL Data Block: Not Supported 00:29:55.336 Replay Protected Memory Block: Not Supported 00:29:55.336 00:29:55.336 Firmware Slot Information 00:29:55.336 ========================= 00:29:55.336 Active slot: 0 00:29:55.336 00:29:55.336 00:29:55.336 Error Log 00:29:55.336 ========= 00:29:55.336 00:29:55.336 Active Namespaces 00:29:55.336 ================= 00:29:55.336 Discovery Log Page 00:29:55.336 ================== 00:29:55.336 Generation Counter: 2 00:29:55.336 Number of Records: 2 00:29:55.336 Record Format: 0 00:29:55.336 00:29:55.336 Discovery Log Entry 0 00:29:55.336 ---------------------- 00:29:55.336 Transport Type: 3 (TCP) 00:29:55.336 Address Family: 1 (IPv4) 00:29:55.336 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:55.336 Entry Flags: 00:29:55.336 Duplicate Returned Information: 1 00:29:55.336 Explicit Persistent Connection Support for Discovery: 1 00:29:55.336 Transport Requirements: 00:29:55.336 Secure Channel: Not Required 00:29:55.336 Port ID: 0 (0x0000) 00:29:55.336 Controller ID: 65535 (0xffff) 00:29:55.336 Admin Max SQ Size: 128 00:29:55.336 Transport Service Identifier: 4420 00:29:55.336 NVM Subsystem Qualified Name: 
nqn.2014-08.org.nvmexpress.discovery 00:29:55.336 Transport Address: 10.0.0.2 00:29:55.336 Discovery Log Entry 1 00:29:55.336 ---------------------- 00:29:55.336 Transport Type: 3 (TCP) 00:29:55.336 Address Family: 1 (IPv4) 00:29:55.336 Subsystem Type: 2 (NVM Subsystem) 00:29:55.336 Entry Flags: 00:29:55.336 Duplicate Returned Information: 0 00:29:55.336 Explicit Persistent Connection Support for Discovery: 0 00:29:55.336 Transport Requirements: 00:29:55.336 Secure Channel: Not Required 00:29:55.336 Port ID: 0 (0x0000) 00:29:55.336 Controller ID: 65535 (0xffff) 00:29:55.336 Admin Max SQ Size: 128 00:29:55.336 Transport Service Identifier: 4420 00:29:55.336 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:55.336 Transport Address: 10.0.0.2 [2024-04-25 20:24:53.112977] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:55.336 [2024-04-25 20:24:53.112992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.336 [2024-04-25 20:24:53.113000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.336 [2024-04-25 20:24:53.113006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.336 [2024-04-25 20:24:53.113012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.336 [2024-04-25 20:24:53.113025] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.113030] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.113036] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.336 [2024-04-25 20:24:53.113046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.336 [2024-04-25 20:24:53.113064] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.336 [2024-04-25 20:24:53.113229] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.336 [2024-04-25 20:24:53.113236] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.336 [2024-04-25 20:24:53.113241] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.113246] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.336 [2024-04-25 20:24:53.113256] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.113260] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.113266] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.336 [2024-04-25 20:24:53.113275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.336 [2024-04-25 20:24:53.113289] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.336 [2024-04-25 20:24:53.113445] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.336 [2024-04-25 20:24:53.113452] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.336 [2024-04-25 20:24:53.113455] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.113460] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.336 [2024-04-25 20:24:53.113466] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:55.336 [2024-04-25 20:24:53.113472] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:55.336 [2024-04-25 20:24:53.113483] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.113488] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.113497] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.336 [2024-04-25 20:24:53.113506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.336 [2024-04-25 20:24:53.113516] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.336 [2024-04-25 20:24:53.113659] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.336 [2024-04-25 20:24:53.113665] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.336 [2024-04-25 20:24:53.113669] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.113673] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.336 [2024-04-25 20:24:53.113683] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.113687] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.113692] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.336 [2024-04-25 20:24:53.113701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.336 [2024-04-25 20:24:53.113711] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.336 [2024-04-25 20:24:53.113850] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.336 [2024-04-25 20:24:53.113856] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.336 [2024-04-25 20:24:53.113860] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.113865] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.336 [2024-04-25 20:24:53.113874] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.113878] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.113882] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.336 [2024-04-25 20:24:53.113891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.336 [2024-04-25 20:24:53.113900] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.336 [2024-04-25 20:24:53.114035] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.336 [2024-04-25 20:24:53.114041] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.336 [2024-04-25 20:24:53.114045] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.114049] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.336 [2024-04-25 20:24:53.114059] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.114063] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.114072] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.336 [2024-04-25 20:24:53.114080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.336 [2024-04-25 20:24:53.114089] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.336 [2024-04-25 20:24:53.114231] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.336 [2024-04-25 20:24:53.114237] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.336 [2024-04-25 20:24:53.114241] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.114246] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.336 [2024-04-25 20:24:53.114255] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.114259] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.336 [2024-04-25 20:24:53.114263] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.336 [2024-04-25 20:24:53.114271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.336 [2024-04-25 20:24:53.114280] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.336 [2024-04-25 20:24:53.114416] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.337 [2024-04-25 20:24:53.114425] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.337 [2024-04-25 20:24:53.114430] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.337 [2024-04-25 20:24:53.114434] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.337 [2024-04-25 20:24:53.114443] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.337 [2024-04-25 20:24:53.114447] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.337 [2024-04-25 20:24:53.114451] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.337 [2024-04-25 20:24:53.114459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.337 [2024-04-25 20:24:53.114468] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.337 [2024-04-25 20:24:53.118499] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:29:55.337 [2024-04-25 20:24:53.118507] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.337 [2024-04-25 20:24:53.118512] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.337 [2024-04-25 20:24:53.118516] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.337 [2024-04-25 20:24:53.118526] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.337 [2024-04-25 20:24:53.118530] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.337 [2024-04-25 20:24:53.118534] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.337 [2024-04-25 20:24:53.118544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.337 [2024-04-25 20:24:53.118554] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.337 [2024-04-25 20:24:53.118674] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.337 [2024-04-25 20:24:53.118680] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.337 [2024-04-25 20:24:53.118684] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.337 [2024-04-25 20:24:53.118688] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.337 [2024-04-25 20:24:53.118696] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:29:55.337 00:29:55.337 20:24:53 -- host/identify.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:55.337 [2024-04-25 20:24:53.192480] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:29:55.337 [2024-04-25 20:24:53.192588] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1702621 ] 00:29:55.337 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.337 [2024-04-25 20:24:53.251386] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:55.337 [2024-04-25 20:24:53.251466] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:55.337 [2024-04-25 20:24:53.251481] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:55.337 [2024-04-25 20:24:53.251510] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:55.337 [2024-04-25 20:24:53.251527] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:55.337 [2024-04-25 20:24:53.251939] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:55.337 [2024-04-25 20:24:53.251974] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x613000001fc0 0 00:29:55.607 [2024-04-25 20:24:53.266510] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:55.607 [2024-04-25 20:24:53.266530] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:55.607 [2024-04-25 20:24:53.266537] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:55.607 [2024-04-25 20:24:53.266544] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:55.607 [2024-04-25 20:24:53.266583] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.607 [2024-04-25 20:24:53.266592] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.607 [2024-04-25 20:24:53.266601] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.608 [2024-04-25 20:24:53.266623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:55.608 [2024-04-25 20:24:53.266647] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.608 [2024-04-25 20:24:53.274510] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.608 [2024-04-25 20:24:53.274525] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.608 [2024-04-25 20:24:53.274530] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.274537] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:55.608 [2024-04-25 20:24:53.274556] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:55.608 [2024-04-25 20:24:53.274569] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:55.608 [2024-04-25 20:24:53.274577] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:55.608 [2024-04-25 20:24:53.274592] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.274599] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:29:55.608 [2024-04-25 20:24:53.274610] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.608 [2024-04-25 20:24:53.274626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.608 [2024-04-25 20:24:53.274645] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.608 [2024-04-25 20:24:53.274818] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.608 [2024-04-25 20:24:53.274827] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.608 [2024-04-25 20:24:53.274838] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.274845] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:55.608 [2024-04-25 20:24:53.274852] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:55.608 [2024-04-25 20:24:53.274862] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:55.608 [2024-04-25 20:24:53.274870] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.274875] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.274887] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.608 [2024-04-25 20:24:53.274898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.608 [2024-04-25 20:24:53.274910] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.608 [2024-04-25 20:24:53.275052] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.608 [2024-04-25 20:24:53.275059] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.608 [2024-04-25 20:24:53.275064] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.275068] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:55.608 [2024-04-25 20:24:53.275075] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:55.608 [2024-04-25 20:24:53.275085] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:55.608 [2024-04-25 20:24:53.275093] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.275099] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.275104] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.608 [2024-04-25 20:24:53.275115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.608 [2024-04-25 20:24:53.275126] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.608 [2024-04-25 20:24:53.275271] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.608 [2024-04-25 20:24:53.275278] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.608 [2024-04-25 20:24:53.275282] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.275289] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:55.608 [2024-04-25 20:24:53.275297] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:55.608 [2024-04-25 20:24:53.275310] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.275315] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.275321] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.608 [2024-04-25 20:24:53.275332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.608 [2024-04-25 20:24:53.275343] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.608 [2024-04-25 20:24:53.275474] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.608 [2024-04-25 20:24:53.275481] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.608 [2024-04-25 20:24:53.275485] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.275497] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:55.608 [2024-04-25 20:24:53.275503] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:55.608 [2024-04-25 20:24:53.275510] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:55.608 [2024-04-25 20:24:53.275519] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:55.608 [2024-04-25 20:24:53.275627] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:55.608 [2024-04-25 20:24:53.275633] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:55.608 [2024-04-25 20:24:53.275644] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.275651] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.275660] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.608 [2024-04-25 20:24:53.275670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.608 [2024-04-25 20:24:53.275681] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.608 [2024-04-25 20:24:53.275817] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.608 [2024-04-25 20:24:53.275826] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.608 [2024-04-25 20:24:53.275830] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.608 [2024-04-25 
20:24:53.275834] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:55.608 [2024-04-25 20:24:53.275840] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:55.608 [2024-04-25 20:24:53.275851] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.275856] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.275861] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.608 [2024-04-25 20:24:53.275870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.608 [2024-04-25 20:24:53.275882] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.608 [2024-04-25 20:24:53.276018] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.608 [2024-04-25 20:24:53.276025] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.608 [2024-04-25 20:24:53.276030] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.276035] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:55.608 [2024-04-25 20:24:53.276041] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:55.608 [2024-04-25 20:24:53.276048] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:55.608 [2024-04-25 20:24:53.276057] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:55.608 [2024-04-25 20:24:53.276070] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:55.608 [2024-04-25 20:24:53.276083] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.276088] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.276094] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.608 [2024-04-25 20:24:53.276104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.608 [2024-04-25 20:24:53.276115] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.608 [2024-04-25 20:24:53.276306] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:55.608 [2024-04-25 20:24:53.276314] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:55.608 [2024-04-25 20:24:53.276318] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.276324] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=0 00:29:55.608 [2024-04-25 20:24:53.276333] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:55.608 
[2024-04-25 20:24:53.276380] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.276386] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.316790] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.608 [2024-04-25 20:24:53.316805] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.608 [2024-04-25 20:24:53.316809] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.316814] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:55.608 [2024-04-25 20:24:53.316829] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:55.608 [2024-04-25 20:24:53.316836] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:55.608 [2024-04-25 20:24:53.316842] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:55.608 [2024-04-25 20:24:53.316848] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:55.608 [2024-04-25 20:24:53.316857] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:55.608 [2024-04-25 20:24:53.316863] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:55.608 [2024-04-25 20:24:53.316873] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:55.608 [2024-04-25 20:24:53.316885] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.316891] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.316897] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.608 [2024-04-25 20:24:53.316907] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:55.608 [2024-04-25 20:24:53.316924] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.608 [2024-04-25 20:24:53.317083] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.608 [2024-04-25 20:24:53.317090] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.608 [2024-04-25 20:24:53.317094] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.317098] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:55.608 [2024-04-25 20:24:53.317106] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.317112] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.608 [2024-04-25 20:24:53.317117] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:55.608 [2024-04-25 20:24:53.317126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.608 [2024-04-25 20:24:53.317133] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.317137] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.317141] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x613000001fc0) 00:29:55.609 [2024-04-25 20:24:53.317148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.609 [2024-04-25 20:24:53.317154] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.317158] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.317162] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x613000001fc0) 00:29:55.609 [2024-04-25 20:24:53.317169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.609 [2024-04-25 20:24:53.317175] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.317179] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.317183] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.609 [2024-04-25 20:24:53.317190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.609 [2024-04-25 20:24:53.317195] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:55.609 [2024-04-25 20:24:53.317207] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:55.609 [2024-04-25 20:24:53.317215] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.317219] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.317224] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:55.609 [2024-04-25 20:24:53.317235] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.609 [2024-04-25 20:24:53.317248] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:55.609 [2024-04-25 20:24:53.317253] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:29:55.609 [2024-04-25 20:24:53.317258] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:29:55.609 [2024-04-25 20:24:53.317263] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.609 [2024-04-25 20:24:53.317268] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:55.609 [2024-04-25 20:24:53.317462] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.609 [2024-04-25 20:24:53.317469] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.609 [2024-04-25 20:24:53.317473] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.317477] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:55.609 [2024-04-25 20:24:53.317483] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:55.609 [2024-04-25 20:24:53.317501] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:55.609 [2024-04-25 20:24:53.317509] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:55.609 [2024-04-25 20:24:53.317521] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:55.609 [2024-04-25 20:24:53.317528] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.317535] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.317540] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:55.609 [2024-04-25 20:24:53.317549] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:55.609 [2024-04-25 20:24:53.317560] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:55.609 [2024-04-25 20:24:53.317698] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.609 [2024-04-25 20:24:53.317704] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.609 [2024-04-25 20:24:53.317708] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.317712] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:55.609 [2024-04-25 20:24:53.317760] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:55.609 [2024-04-25 20:24:53.317772] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:55.609 [2024-04-25 20:24:53.317782] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.317787] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.317793] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:55.609 [2024-04-25 20:24:53.317802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.609 [2024-04-25 20:24:53.317813] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:55.609 [2024-04-25 20:24:53.317990] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:55.609 [2024-04-25 20:24:53.317996] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:55.609 [2024-04-25 20:24:53.318001] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.318006] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=4 00:29:55.609 [2024-04-25 20:24:53.318011] 
nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:55.609 [2024-04-25 20:24:53.318020] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.318025] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.318103] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.609 [2024-04-25 20:24:53.318109] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.609 [2024-04-25 20:24:53.318113] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.318118] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:55.609 [2024-04-25 20:24:53.318134] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:55.609 [2024-04-25 20:24:53.318148] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:55.609 [2024-04-25 20:24:53.318158] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:55.609 [2024-04-25 20:24:53.318167] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.318172] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.318177] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:55.609 [2024-04-25 20:24:53.318186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.609 [2024-04-25 20:24:53.318196] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:55.609 [2024-04-25 20:24:53.318358] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:55.609 [2024-04-25 20:24:53.318365] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:55.609 [2024-04-25 20:24:53.318370] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.318374] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=4 00:29:55.609 [2024-04-25 20:24:53.318379] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:55.609 [2024-04-25 20:24:53.318387] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.318390] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.318462] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.609 [2024-04-25 20:24:53.318468] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.609 [2024-04-25 20:24:53.318472] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.318477] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:55.609 [2024-04-25 20:24:53.322497] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 
ms) 00:29:55.609 [2024-04-25 20:24:53.322509] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:55.609 [2024-04-25 20:24:53.322520] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.322525] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.322531] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:55.609 [2024-04-25 20:24:53.322540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.609 [2024-04-25 20:24:53.322551] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:55.609 [2024-04-25 20:24:53.322681] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:55.609 [2024-04-25 20:24:53.322687] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:55.609 [2024-04-25 20:24:53.322691] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.322695] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=4 00:29:55.609 [2024-04-25 20:24:53.322701] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:55.609 [2024-04-25 20:24:53.322709] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.322714] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.322765] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.609 [2024-04-25 20:24:53.322771] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.609 [2024-04-25 20:24:53.322775] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.322780] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:55.609 [2024-04-25 20:24:53.322792] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:55.609 [2024-04-25 20:24:53.322800] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:55.609 [2024-04-25 20:24:53.322809] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:55.609 [2024-04-25 20:24:53.322816] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:55.609 [2024-04-25 20:24:53.322822] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:55.609 [2024-04-25 20:24:53.322829] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:55.609 [2024-04-25 20:24:53.322834] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:55.609 [2024-04-25 20:24:53.322843] 
nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:55.609 [2024-04-25 20:24:53.322868] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.322874] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.322879] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:55.609 [2024-04-25 20:24:53.322891] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.609 [2024-04-25 20:24:53.322899] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.322904] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.609 [2024-04-25 20:24:53.322909] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:29:55.609 [2024-04-25 20:24:53.322918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.609 [2024-04-25 20:24:53.322931] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:55.609 [2024-04-25 20:24:53.322937] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:29:55.609 [2024-04-25 20:24:53.323060] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.610 [2024-04-25 20:24:53.323069] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.610 [2024-04-25 20:24:53.323074] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323079] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:55.610 [2024-04-25 20:24:53.323087] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.610 [2024-04-25 20:24:53.323095] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.610 [2024-04-25 20:24:53.323099] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323103] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:29:55.610 [2024-04-25 20:24:53.323112] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323115] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323121] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:29:55.610 [2024-04-25 20:24:53.323128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.610 [2024-04-25 20:24:53.323138] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:29:55.610 [2024-04-25 20:24:53.323247] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.610 [2024-04-25 20:24:53.323254] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.610 [2024-04-25 20:24:53.323257] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323261] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on 
tqpair=0x613000001fc0 00:29:55.610 [2024-04-25 20:24:53.323270] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323273] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323278] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:29:55.610 [2024-04-25 20:24:53.323285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.610 [2024-04-25 20:24:53.323295] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:29:55.610 [2024-04-25 20:24:53.323419] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.610 [2024-04-25 20:24:53.323425] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.610 [2024-04-25 20:24:53.323429] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323433] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:29:55.610 [2024-04-25 20:24:53.323442] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323445] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323450] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:29:55.610 [2024-04-25 20:24:53.323457] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.610 [2024-04-25 20:24:53.323466] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:29:55.610 [2024-04-25 20:24:53.323585] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.610 [2024-04-25 20:24:53.323592] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.610 [2024-04-25 20:24:53.323596] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323600] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:29:55.610 [2024-04-25 20:24:53.323616] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323621] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323627] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:29:55.610 [2024-04-25 20:24:53.323636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.610 [2024-04-25 20:24:53.323645] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323649] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323655] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:55.610 [2024-04-25 20:24:53.323663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.610 [2024-04-25 20:24:53.323674] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323678] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323689] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x613000001fc0) 00:29:55.610 [2024-04-25 20:24:53.323698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.610 [2024-04-25 20:24:53.323708] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323713] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323718] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x613000001fc0) 00:29:55.610 [2024-04-25 20:24:53.323728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.610 [2024-04-25 20:24:53.323740] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:29:55.610 [2024-04-25 20:24:53.323746] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:55.610 [2024-04-25 20:24:53.323751] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b940, cid 6, qid 0 00:29:55.610 [2024-04-25 20:24:53.323756] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:29:55.610 [2024-04-25 20:24:53.323934] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:55.610 [2024-04-25 20:24:53.323941] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:55.610 [2024-04-25 20:24:53.323948] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.323953] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=8192, cccid=5 00:29:55.610 [2024-04-25 20:24:53.323959] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b7e0) on tqpair(0x613000001fc0): expected_datao=0, payload_size=8192 00:29:55.610 [2024-04-25 20:24:53.324046] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.324051] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.324058] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:55.610 [2024-04-25 20:24:53.324064] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:55.610 [2024-04-25 20:24:53.324068] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.324072] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=512, cccid=4 00:29:55.610 [2024-04-25 20:24:53.324077] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=512 00:29:55.610 [2024-04-25 20:24:53.324085] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.324089] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.324099] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:55.610 [2024-04-25 20:24:53.324105] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:55.610 [2024-04-25 20:24:53.324109] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.324113] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=512, cccid=6 00:29:55.610 [2024-04-25 20:24:53.324118] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b940) on tqpair(0x613000001fc0): expected_datao=0, payload_size=512 00:29:55.610 [2024-04-25 20:24:53.324127] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.324130] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.324137] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:55.610 [2024-04-25 20:24:53.324143] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:55.610 [2024-04-25 20:24:53.324146] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.324150] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=7 00:29:55.610 [2024-04-25 20:24:53.324156] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001baa0) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:55.610 [2024-04-25 20:24:53.324164] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.324167] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.324175] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.610 [2024-04-25 20:24:53.324181] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.610 [2024-04-25 20:24:53.324185] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.324190] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:29:55.610 [2024-04-25 20:24:53.324208] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.610 [2024-04-25 20:24:53.324215] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.610 [2024-04-25 20:24:53.324218] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.324222] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:55.610 [2024-04-25 20:24:53.324235] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.610 [2024-04-25 20:24:53.324241] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.610 [2024-04-25 20:24:53.324244] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.324248] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b940) on tqpair=0x613000001fc0 00:29:55.610 [2024-04-25 20:24:53.324259] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.610 [2024-04-25 20:24:53.324265] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.610 [2024-04-25 20:24:53.324268] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.610 [2024-04-25 20:24:53.324272] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x613000001fc0 00:29:55.610 
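The *DEBUG* entries above show the SPDK NVMe/TCP initiator walking the admin queue of nqn.2016-06.io.spdk:cnode1 through its initialization state machine (set keep alive timeout, set number of queues, identify active ns, identify ns, identify namespace id descriptors, then log pages and features) until the controller reaches the ready state, after which the identify tool prints the controller data that follows. A minimal sketch of reproducing the same identify pass by hand against an already-running target at 10.0.0.2:4420 is given here; the example binary path is an assumption that varies between SPDK versions, and the nvme-cli cross-check is not part of this test run.

#!/usr/bin/env bash
# Sketch only: re-run an identify pass against a live SPDK NVMe-oF/TCP target.
# Assumptions: the target already listens on 10.0.0.2:4420 for
# nqn.2016-06.io.spdk:cnode1, and the identify example was built at
# ./build/examples/identify (older trees keep it under examples/nvme/identify/).
set -euo pipefail

traddr=10.0.0.2
trsvcid=4420
subnqn=nqn.2016-06.io.spdk:cnode1

# Transport ID string in the form the SPDK example apps accept via -r.
trid="trtype:TCP adrfam:IPv4 traddr:${traddr} trsvcid:${trsvcid} subnqn:${subnqn}"

# Dumps controller capabilities, namespaces and health info, as captured below.
./build/examples/identify -r "${trid}"

# Optional cross-check with nvme-cli against the same listener.
nvme discover -t tcp -a "${traddr}" -s "${trsvcid}"
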
===================================================== 00:29:55.610 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:55.610 ===================================================== 00:29:55.610 Controller Capabilities/Features 00:29:55.610 ================================ 00:29:55.610 Vendor ID: 8086 00:29:55.610 Subsystem Vendor ID: 8086 00:29:55.610 Serial Number: SPDK00000000000001 00:29:55.610 Model Number: SPDK bdev Controller 00:29:55.610 Firmware Version: 24.01.1 00:29:55.610 Recommended Arb Burst: 6 00:29:55.610 IEEE OUI Identifier: e4 d2 5c 00:29:55.610 Multi-path I/O 00:29:55.610 May have multiple subsystem ports: Yes 00:29:55.610 May have multiple controllers: Yes 00:29:55.610 Associated with SR-IOV VF: No 00:29:55.610 Max Data Transfer Size: 131072 00:29:55.610 Max Number of Namespaces: 32 00:29:55.610 Max Number of I/O Queues: 127 00:29:55.610 NVMe Specification Version (VS): 1.3 00:29:55.610 NVMe Specification Version (Identify): 1.3 00:29:55.610 Maximum Queue Entries: 128 00:29:55.610 Contiguous Queues Required: Yes 00:29:55.610 Arbitration Mechanisms Supported 00:29:55.610 Weighted Round Robin: Not Supported 00:29:55.610 Vendor Specific: Not Supported 00:29:55.611 Reset Timeout: 15000 ms 00:29:55.611 Doorbell Stride: 4 bytes 00:29:55.611 NVM Subsystem Reset: Not Supported 00:29:55.611 Command Sets Supported 00:29:55.611 NVM Command Set: Supported 00:29:55.611 Boot Partition: Not Supported 00:29:55.611 Memory Page Size Minimum: 4096 bytes 00:29:55.611 Memory Page Size Maximum: 4096 bytes 00:29:55.611 Persistent Memory Region: Not Supported 00:29:55.611 Optional Asynchronous Events Supported 00:29:55.611 Namespace Attribute Notices: Supported 00:29:55.611 Firmware Activation Notices: Not Supported 00:29:55.611 ANA Change Notices: Not Supported 00:29:55.611 PLE Aggregate Log Change Notices: Not Supported 00:29:55.611 LBA Status Info Alert Notices: Not Supported 00:29:55.611 EGE Aggregate Log Change Notices: Not Supported 00:29:55.611 Normal NVM Subsystem Shutdown event: Not Supported 00:29:55.611 Zone Descriptor Change Notices: Not Supported 00:29:55.611 Discovery Log Change Notices: Not Supported 00:29:55.611 Controller Attributes 00:29:55.611 128-bit Host Identifier: Supported 00:29:55.611 Non-Operational Permissive Mode: Not Supported 00:29:55.611 NVM Sets: Not Supported 00:29:55.611 Read Recovery Levels: Not Supported 00:29:55.611 Endurance Groups: Not Supported 00:29:55.611 Predictable Latency Mode: Not Supported 00:29:55.611 Traffic Based Keep ALive: Not Supported 00:29:55.611 Namespace Granularity: Not Supported 00:29:55.611 SQ Associations: Not Supported 00:29:55.611 UUID List: Not Supported 00:29:55.611 Multi-Domain Subsystem: Not Supported 00:29:55.611 Fixed Capacity Management: Not Supported 00:29:55.611 Variable Capacity Management: Not Supported 00:29:55.611 Delete Endurance Group: Not Supported 00:29:55.611 Delete NVM Set: Not Supported 00:29:55.611 Extended LBA Formats Supported: Not Supported 00:29:55.611 Flexible Data Placement Supported: Not Supported 00:29:55.611 00:29:55.611 Controller Memory Buffer Support 00:29:55.611 ================================ 00:29:55.611 Supported: No 00:29:55.611 00:29:55.611 Persistent Memory Region Support 00:29:55.611 ================================ 00:29:55.611 Supported: No 00:29:55.611 00:29:55.611 Admin Command Set Attributes 00:29:55.611 ============================ 00:29:55.611 Security Send/Receive: Not Supported 00:29:55.611 Format NVM: Not Supported 00:29:55.611 Firmware Activate/Download: 
Not Supported 00:29:55.611 Namespace Management: Not Supported 00:29:55.611 Device Self-Test: Not Supported 00:29:55.611 Directives: Not Supported 00:29:55.611 NVMe-MI: Not Supported 00:29:55.611 Virtualization Management: Not Supported 00:29:55.611 Doorbell Buffer Config: Not Supported 00:29:55.611 Get LBA Status Capability: Not Supported 00:29:55.611 Command & Feature Lockdown Capability: Not Supported 00:29:55.611 Abort Command Limit: 4 00:29:55.611 Async Event Request Limit: 4 00:29:55.611 Number of Firmware Slots: N/A 00:29:55.611 Firmware Slot 1 Read-Only: N/A 00:29:55.611 Firmware Activation Without Reset: N/A 00:29:55.611 Multiple Update Detection Support: N/A 00:29:55.611 Firmware Update Granularity: No Information Provided 00:29:55.611 Per-Namespace SMART Log: No 00:29:55.611 Asymmetric Namespace Access Log Page: Not Supported 00:29:55.611 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:55.611 Command Effects Log Page: Supported 00:29:55.611 Get Log Page Extended Data: Supported 00:29:55.611 Telemetry Log Pages: Not Supported 00:29:55.611 Persistent Event Log Pages: Not Supported 00:29:55.611 Supported Log Pages Log Page: May Support 00:29:55.611 Commands Supported & Effects Log Page: Not Supported 00:29:55.611 Feature Identifiers & Effects Log Page:May Support 00:29:55.611 NVMe-MI Commands & Effects Log Page: May Support 00:29:55.611 Data Area 4 for Telemetry Log: Not Supported 00:29:55.611 Error Log Page Entries Supported: 128 00:29:55.611 Keep Alive: Supported 00:29:55.611 Keep Alive Granularity: 10000 ms 00:29:55.611 00:29:55.611 NVM Command Set Attributes 00:29:55.611 ========================== 00:29:55.611 Submission Queue Entry Size 00:29:55.611 Max: 64 00:29:55.611 Min: 64 00:29:55.611 Completion Queue Entry Size 00:29:55.611 Max: 16 00:29:55.611 Min: 16 00:29:55.611 Number of Namespaces: 32 00:29:55.611 Compare Command: Supported 00:29:55.611 Write Uncorrectable Command: Not Supported 00:29:55.611 Dataset Management Command: Supported 00:29:55.611 Write Zeroes Command: Supported 00:29:55.611 Set Features Save Field: Not Supported 00:29:55.611 Reservations: Supported 00:29:55.611 Timestamp: Not Supported 00:29:55.611 Copy: Supported 00:29:55.611 Volatile Write Cache: Present 00:29:55.611 Atomic Write Unit (Normal): 1 00:29:55.611 Atomic Write Unit (PFail): 1 00:29:55.611 Atomic Compare & Write Unit: 1 00:29:55.611 Fused Compare & Write: Supported 00:29:55.611 Scatter-Gather List 00:29:55.611 SGL Command Set: Supported 00:29:55.611 SGL Keyed: Supported 00:29:55.611 SGL Bit Bucket Descriptor: Not Supported 00:29:55.611 SGL Metadata Pointer: Not Supported 00:29:55.611 Oversized SGL: Not Supported 00:29:55.611 SGL Metadata Address: Not Supported 00:29:55.611 SGL Offset: Supported 00:29:55.611 Transport SGL Data Block: Not Supported 00:29:55.611 Replay Protected Memory Block: Not Supported 00:29:55.611 00:29:55.611 Firmware Slot Information 00:29:55.611 ========================= 00:29:55.611 Active slot: 1 00:29:55.611 Slot 1 Firmware Revision: 24.01.1 00:29:55.611 00:29:55.611 00:29:55.611 Commands Supported and Effects 00:29:55.611 ============================== 00:29:55.611 Admin Commands 00:29:55.611 -------------- 00:29:55.611 Get Log Page (02h): Supported 00:29:55.611 Identify (06h): Supported 00:29:55.611 Abort (08h): Supported 00:29:55.611 Set Features (09h): Supported 00:29:55.611 Get Features (0Ah): Supported 00:29:55.611 Asynchronous Event Request (0Ch): Supported 00:29:55.611 Keep Alive (18h): Supported 00:29:55.611 I/O Commands 00:29:55.611 ------------ 
00:29:55.611 Flush (00h): Supported LBA-Change 00:29:55.611 Write (01h): Supported LBA-Change 00:29:55.611 Read (02h): Supported 00:29:55.611 Compare (05h): Supported 00:29:55.611 Write Zeroes (08h): Supported LBA-Change 00:29:55.611 Dataset Management (09h): Supported LBA-Change 00:29:55.611 Copy (19h): Supported LBA-Change 00:29:55.611 Unknown (79h): Supported LBA-Change 00:29:55.611 Unknown (7Ah): Supported 00:29:55.611 00:29:55.611 Error Log 00:29:55.611 ========= 00:29:55.611 00:29:55.611 Arbitration 00:29:55.611 =========== 00:29:55.611 Arbitration Burst: 1 00:29:55.611 00:29:55.611 Power Management 00:29:55.611 ================ 00:29:55.611 Number of Power States: 1 00:29:55.611 Current Power State: Power State #0 00:29:55.611 Power State #0: 00:29:55.611 Max Power: 0.00 W 00:29:55.611 Non-Operational State: Operational 00:29:55.611 Entry Latency: Not Reported 00:29:55.611 Exit Latency: Not Reported 00:29:55.611 Relative Read Throughput: 0 00:29:55.611 Relative Read Latency: 0 00:29:55.611 Relative Write Throughput: 0 00:29:55.611 Relative Write Latency: 0 00:29:55.611 Idle Power: Not Reported 00:29:55.611 Active Power: Not Reported 00:29:55.611 Non-Operational Permissive Mode: Not Supported 00:29:55.611 00:29:55.611 Health Information 00:29:55.611 ================== 00:29:55.611 Critical Warnings: 00:29:55.611 Available Spare Space: OK 00:29:55.611 Temperature: OK 00:29:55.611 Device Reliability: OK 00:29:55.611 Read Only: No 00:29:55.611 Volatile Memory Backup: OK 00:29:55.611 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:55.611 Temperature Threshold: [2024-04-25 20:24:53.324410] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.611 [2024-04-25 20:24:53.324416] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.611 [2024-04-25 20:24:53.324421] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x613000001fc0) 00:29:55.611 [2024-04-25 20:24:53.324430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.611 [2024-04-25 20:24:53.324443] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:29:55.611 [2024-04-25 20:24:53.324557] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.611 [2024-04-25 20:24:53.324564] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.611 [2024-04-25 20:24:53.324569] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.611 [2024-04-25 20:24:53.324574] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x613000001fc0 00:29:55.611 [2024-04-25 20:24:53.324616] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:55.611 [2024-04-25 20:24:53.324629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.611 [2024-04-25 20:24:53.324637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.611 [2024-04-25 20:24:53.324644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.611 [2024-04-25 20:24:53.324650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:55.611 [2024-04-25 20:24:53.324659] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.611 [2024-04-25 20:24:53.324665] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.611 [2024-04-25 20:24:53.324671] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.612 [2024-04-25 20:24:53.324681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.612 [2024-04-25 20:24:53.324694] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.612 [2024-04-25 20:24:53.324794] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.612 [2024-04-25 20:24:53.324803] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.612 [2024-04-25 20:24:53.324807] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.324813] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.612 [2024-04-25 20:24:53.324822] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.324827] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.324832] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.612 [2024-04-25 20:24:53.324840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.612 [2024-04-25 20:24:53.324855] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.612 [2024-04-25 20:24:53.324966] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.612 [2024-04-25 20:24:53.324972] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.612 [2024-04-25 20:24:53.324975] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.324980] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.612 [2024-04-25 20:24:53.324986] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:55.612 [2024-04-25 20:24:53.324992] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:55.612 [2024-04-25 20:24:53.325002] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325007] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325013] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.612 [2024-04-25 20:24:53.325021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.612 [2024-04-25 20:24:53.325032] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.612 [2024-04-25 20:24:53.325136] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.612 [2024-04-25 20:24:53.325142] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.612 [2024-04-25 20:24:53.325145] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325149] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.612 [2024-04-25 20:24:53.325159] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325163] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325168] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.612 [2024-04-25 20:24:53.325175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.612 [2024-04-25 20:24:53.325185] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.612 [2024-04-25 20:24:53.325287] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.612 [2024-04-25 20:24:53.325293] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.612 [2024-04-25 20:24:53.325297] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325302] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.612 [2024-04-25 20:24:53.325311] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325315] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325319] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.612 [2024-04-25 20:24:53.325327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.612 [2024-04-25 20:24:53.325337] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.612 [2024-04-25 20:24:53.325435] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.612 [2024-04-25 20:24:53.325443] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.612 [2024-04-25 20:24:53.325446] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325451] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.612 [2024-04-25 20:24:53.325460] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325464] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325468] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.612 [2024-04-25 20:24:53.325478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.612 [2024-04-25 20:24:53.325488] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.612 [2024-04-25 20:24:53.325594] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.612 [2024-04-25 20:24:53.325600] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.612 [2024-04-25 20:24:53.325604] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325608] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.612 [2024-04-25 20:24:53.325617] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325621] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325625] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.612 [2024-04-25 20:24:53.325637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.612 [2024-04-25 20:24:53.325647] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.612 [2024-04-25 20:24:53.325750] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.612 [2024-04-25 20:24:53.325756] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.612 [2024-04-25 20:24:53.325760] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325764] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.612 [2024-04-25 20:24:53.325774] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325778] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325782] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.612 [2024-04-25 20:24:53.325790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.612 [2024-04-25 20:24:53.325799] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.612 [2024-04-25 20:24:53.325894] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.612 [2024-04-25 20:24:53.325900] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.612 [2024-04-25 20:24:53.325904] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325910] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.612 [2024-04-25 20:24:53.325919] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325923] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.325927] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.612 [2024-04-25 20:24:53.325934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.612 [2024-04-25 20:24:53.325944] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.612 [2024-04-25 20:24:53.326038] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.612 [2024-04-25 20:24:53.326044] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.612 [2024-04-25 20:24:53.326048] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.326052] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.612 [2024-04-25 
20:24:53.326061] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.326065] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.326069] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.612 [2024-04-25 20:24:53.326077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.612 [2024-04-25 20:24:53.326086] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.612 [2024-04-25 20:24:53.326182] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.612 [2024-04-25 20:24:53.326188] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.612 [2024-04-25 20:24:53.326192] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.326196] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.612 [2024-04-25 20:24:53.326205] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.326209] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.326213] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.612 [2024-04-25 20:24:53.326221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.612 [2024-04-25 20:24:53.326230] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.612 [2024-04-25 20:24:53.326329] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.612 [2024-04-25 20:24:53.326336] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.612 [2024-04-25 20:24:53.326340] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.326344] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.612 [2024-04-25 20:24:53.326356] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.612 [2024-04-25 20:24:53.326360] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.613 [2024-04-25 20:24:53.326364] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.613 [2024-04-25 20:24:53.326372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.613 [2024-04-25 20:24:53.326383] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.613 [2024-04-25 20:24:53.326476] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.613 [2024-04-25 20:24:53.326482] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.613 [2024-04-25 20:24:53.326486] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.613 [2024-04-25 20:24:53.329517] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.613 [2024-04-25 20:24:53.329529] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:55.613 [2024-04-25 20:24:53.329533] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:55.613 [2024-04-25 20:24:53.329537] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:55.613 [2024-04-25 20:24:53.329545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.613 [2024-04-25 20:24:53.329556] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:55.613 [2024-04-25 20:24:53.329645] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:55.613 [2024-04-25 20:24:53.329651] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:55.613 [2024-04-25 20:24:53.329656] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:55.613 [2024-04-25 20:24:53.329660] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:55.613 [2024-04-25 20:24:53.329668] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:29:55.613 0 Kelvin (-273 Celsius) 00:29:55.613 Available Spare: 0% 00:29:55.613 Available Spare Threshold: 0% 00:29:55.613 Life Percentage Used: 0% 00:29:55.613 Data Units Read: 0 00:29:55.613 Data Units Written: 0 00:29:55.613 Host Read Commands: 0 00:29:55.613 Host Write Commands: 0 00:29:55.613 Controller Busy Time: 0 minutes 00:29:55.613 Power Cycles: 0 00:29:55.613 Power On Hours: 0 hours 00:29:55.613 Unsafe Shutdowns: 0 00:29:55.613 Unrecoverable Media Errors: 0 00:29:55.613 Lifetime Error Log Entries: 0 00:29:55.613 Warning Temperature Time: 0 minutes 00:29:55.613 Critical Temperature Time: 0 minutes 00:29:55.613 00:29:55.613 Number of Queues 00:29:55.613 ================ 00:29:55.613 Number of I/O Submission Queues: 127 00:29:55.613 Number of I/O Completion Queues: 127 00:29:55.613 00:29:55.613 Active Namespaces 00:29:55.613 ================= 00:29:55.613 Namespace ID:1 00:29:55.613 Error Recovery Timeout: Unlimited 00:29:55.613 Command Set Identifier: NVM (00h) 00:29:55.613 Deallocate: Supported 00:29:55.613 Deallocated/Unwritten Error: Not Supported 00:29:55.613 Deallocated Read Value: Unknown 00:29:55.613 Deallocate in Write Zeroes: Not Supported 00:29:55.613 Deallocated Guard Field: 0xFFFF 00:29:55.613 Flush: Supported 00:29:55.613 Reservation: Supported 00:29:55.613 Namespace Sharing Capabilities: Multiple Controllers 00:29:55.613 Size (in LBAs): 131072 (0GiB) 00:29:55.613 Capacity (in LBAs): 131072 (0GiB) 00:29:55.613 Utilization (in LBAs): 131072 (0GiB) 00:29:55.613 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:55.613 EUI64: ABCDEF0123456789 00:29:55.613 UUID: 8d755c75-72f2-42a4-b0aa-15856a538e5a 00:29:55.613 Thin Provisioning: Not Supported 00:29:55.613 Per-NS Atomic Units: Yes 00:29:55.613 Atomic Boundary Size (Normal): 0 00:29:55.613 Atomic Boundary Size (PFail): 0 00:29:55.613 Atomic Boundary Offset: 0 00:29:55.613 Maximum Single Source Range Length: 65535 00:29:55.613 Maximum Copy Length: 65535 00:29:55.613 Maximum Source Range Count: 1 00:29:55.613 NGUID/EUI64 Never Reused: No 00:29:55.613 Namespace Write Protected: No 00:29:55.613 Number of LBA Formats: 1 00:29:55.613 Current LBA Format: LBA Format #00 00:29:55.613 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:55.613 00:29:55.613 20:24:53 -- host/identify.sh@51 -- # sync 00:29:55.613 20:24:53 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
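The xtrace lines around this point show host/identify.sh finishing up: the subsystem is removed over RPC (rpc_cmd nvmf_delete_subsystem), then nvmftestfini unloads the initiator kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring messages below) and kills the nvmf_tgt process it started. A rough manual equivalent of that teardown is sketched here; the PID is copied from this log and would normally be captured when the target is launched, and the rpc.py path is an assumption relative to an SPDK checkout.

#!/usr/bin/env bash
# Sketch only: manual equivalent of the teardown performed by nvmftestfini.
set -x

rpc=./scripts/rpc.py                 # assumed path inside an SPDK checkout
nqn=nqn.2016-06.io.spdk:cnode1
tgt_pid=1702306                      # from this log; normally saved at target start

# Remove the subsystem from the running target before shutting it down.
sudo "$rpc" nvmf_delete_subsystem "$nqn"

# Unload the initiator-side kernel modules, mirroring the modprobe -r calls below.
sudo modprobe -v -r nvme-tcp
sudo modprobe -v -r nvme-fabrics

# Stop the nvmf_tgt process and wait until it has exited.
sudo kill "$tgt_pid"
while sudo kill -0 "$tgt_pid" 2>/dev/null; do sleep 0.5; done
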
00:29:55.613 20:24:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:55.613 20:24:53 -- common/autotest_common.sh@10 -- # set +x 00:29:55.613 20:24:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:55.613 20:24:53 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:55.613 20:24:53 -- host/identify.sh@56 -- # nvmftestfini 00:29:55.613 20:24:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:55.613 20:24:53 -- nvmf/common.sh@116 -- # sync 00:29:55.613 20:24:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:55.613 20:24:53 -- nvmf/common.sh@119 -- # set +e 00:29:55.613 20:24:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:55.613 20:24:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:55.613 rmmod nvme_tcp 00:29:55.613 rmmod nvme_fabrics 00:29:55.613 rmmod nvme_keyring 00:29:55.613 20:24:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:55.613 20:24:53 -- nvmf/common.sh@123 -- # set -e 00:29:55.613 20:24:53 -- nvmf/common.sh@124 -- # return 0 00:29:55.613 20:24:53 -- nvmf/common.sh@477 -- # '[' -n 1702306 ']' 00:29:55.613 20:24:53 -- nvmf/common.sh@478 -- # killprocess 1702306 00:29:55.613 20:24:53 -- common/autotest_common.sh@926 -- # '[' -z 1702306 ']' 00:29:55.613 20:24:53 -- common/autotest_common.sh@930 -- # kill -0 1702306 00:29:55.613 20:24:53 -- common/autotest_common.sh@931 -- # uname 00:29:55.613 20:24:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:55.613 20:24:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1702306 00:29:55.613 20:24:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:55.613 20:24:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:55.613 20:24:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1702306' 00:29:55.613 killing process with pid 1702306 00:29:55.613 20:24:53 -- common/autotest_common.sh@945 -- # kill 1702306 00:29:55.613 [2024-04-25 20:24:53.500105] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:55.613 20:24:53 -- common/autotest_common.sh@950 -- # wait 1702306 00:29:56.182 20:24:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:56.182 20:24:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:56.182 20:24:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:56.182 20:24:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:56.182 20:24:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:56.182 20:24:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.182 20:24:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:56.182 20:24:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.166 20:24:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:58.427 00:29:58.427 real 0m9.782s 00:29:58.427 user 0m8.243s 00:29:58.427 sys 0m4.580s 00:29:58.427 20:24:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:58.427 20:24:56 -- common/autotest_common.sh@10 -- # set +x 00:29:58.427 ************************************ 00:29:58.427 END TEST nvmf_identify 00:29:58.427 ************************************ 00:29:58.427 20:24:56 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:58.427 20:24:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:58.427 20:24:56 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:29:58.427 20:24:56 -- common/autotest_common.sh@10 -- # set +x 00:29:58.427 ************************************ 00:29:58.427 START TEST nvmf_perf 00:29:58.427 ************************************ 00:29:58.427 20:24:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:58.427 * Looking for test storage... 00:29:58.427 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:58.427 20:24:56 -- host/perf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.427 20:24:56 -- nvmf/common.sh@7 -- # uname -s 00:29:58.427 20:24:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.427 20:24:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.427 20:24:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.427 20:24:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.427 20:24:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.427 20:24:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.427 20:24:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.427 20:24:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.427 20:24:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.427 20:24:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.428 20:24:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:58.428 20:24:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:58.428 20:24:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.428 20:24:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.428 20:24:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:58.428 20:24:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:58.428 20:24:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.428 20:24:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.428 20:24:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.428 20:24:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.428 20:24:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.428 20:24:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.428 20:24:56 -- paths/export.sh@5 -- # export PATH 00:29:58.428 20:24:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.428 20:24:56 -- nvmf/common.sh@46 -- # : 0 00:29:58.428 20:24:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:58.428 20:24:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:58.428 20:24:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:58.428 20:24:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.428 20:24:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.428 20:24:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:58.428 20:24:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:58.428 20:24:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:58.428 20:24:56 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:58.428 20:24:56 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:58.428 20:24:56 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:29:58.428 20:24:56 -- host/perf.sh@17 -- # nvmftestinit 00:29:58.428 20:24:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:58.428 20:24:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.428 20:24:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:58.428 20:24:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:58.428 20:24:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:58.428 20:24:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.428 20:24:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:58.428 20:24:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.428 20:24:56 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:29:58.428 20:24:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:58.428 20:24:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:58.428 20:24:56 -- common/autotest_common.sh@10 -- # set +x 00:30:05.012 20:25:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:05.012 20:25:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:05.012 20:25:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:05.012 20:25:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:05.012 20:25:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:05.012 20:25:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:05.012 20:25:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:05.012 20:25:02 -- nvmf/common.sh@294 -- # net_devs=() 
00:30:05.012 20:25:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:05.012 20:25:02 -- nvmf/common.sh@295 -- # e810=() 00:30:05.012 20:25:02 -- nvmf/common.sh@295 -- # local -ga e810 00:30:05.012 20:25:02 -- nvmf/common.sh@296 -- # x722=() 00:30:05.012 20:25:02 -- nvmf/common.sh@296 -- # local -ga x722 00:30:05.012 20:25:02 -- nvmf/common.sh@297 -- # mlx=() 00:30:05.012 20:25:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:05.012 20:25:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.012 20:25:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.012 20:25:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.012 20:25:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.012 20:25:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.012 20:25:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.012 20:25:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.012 20:25:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.013 20:25:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.013 20:25:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.013 20:25:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.013 20:25:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:05.013 20:25:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:05.013 20:25:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:05.013 20:25:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:30:05.013 Found 0000:27:00.0 (0x8086 - 0x159b) 00:30:05.013 20:25:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:05.013 20:25:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:30:05.013 Found 0000:27:00.1 (0x8086 - 0x159b) 00:30:05.013 20:25:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:05.013 20:25:02 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:05.013 20:25:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.013 20:25:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:05.013 20:25:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.013 20:25:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:27:00.0: cvl_0_0' 00:30:05.013 Found net devices under 0000:27:00.0: cvl_0_0 00:30:05.013 20:25:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.013 20:25:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:05.013 20:25:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.013 20:25:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:05.013 20:25:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.013 20:25:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:30:05.013 Found net devices under 0000:27:00.1: cvl_0_1 00:30:05.013 20:25:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.013 20:25:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:05.013 20:25:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:05.013 20:25:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:05.013 20:25:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.013 20:25:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.013 20:25:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.013 20:25:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:05.013 20:25:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.013 20:25:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.013 20:25:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:05.013 20:25:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.013 20:25:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.013 20:25:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:05.013 20:25:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:05.013 20:25:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.013 20:25:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.013 20:25:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.013 20:25:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.013 20:25:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:05.013 20:25:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.013 20:25:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.013 20:25:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.013 20:25:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:05.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:30:05.013 00:30:05.013 --- 10.0.0.2 ping statistics --- 00:30:05.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.013 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:30:05.013 20:25:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:05.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:30:05.013 00:30:05.013 --- 10.0.0.1 ping statistics --- 00:30:05.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.013 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:30:05.013 20:25:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.013 20:25:02 -- nvmf/common.sh@410 -- # return 0 00:30:05.013 20:25:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:05.013 20:25:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.013 20:25:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:05.013 20:25:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.013 20:25:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:05.013 20:25:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:05.013 20:25:02 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:05.013 20:25:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:05.013 20:25:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:05.013 20:25:02 -- common/autotest_common.sh@10 -- # set +x 00:30:05.013 20:25:02 -- nvmf/common.sh@469 -- # nvmfpid=1706815 00:30:05.013 20:25:02 -- nvmf/common.sh@470 -- # waitforlisten 1706815 00:30:05.013 20:25:02 -- common/autotest_common.sh@819 -- # '[' -z 1706815 ']' 00:30:05.013 20:25:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.013 20:25:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:05.013 20:25:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.013 20:25:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:05.013 20:25:02 -- common/autotest_common.sh@10 -- # set +x 00:30:05.013 20:25:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:05.013 [2024-04-25 20:25:02.785103] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:30:05.013 [2024-04-25 20:25:02.785183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.013 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.013 [2024-04-25 20:25:02.880221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:05.274 [2024-04-25 20:25:02.979842] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:05.274 [2024-04-25 20:25:02.980028] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.274 [2024-04-25 20:25:02.980043] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.274 [2024-04-25 20:25:02.980052] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:05.274 [2024-04-25 20:25:02.980136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.274 [2024-04-25 20:25:02.980235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:05.274 [2024-04-25 20:25:02.980334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.274 [2024-04-25 20:25:02.980344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:05.840 20:25:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:05.840 20:25:03 -- common/autotest_common.sh@852 -- # return 0 00:30:05.840 20:25:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:05.840 20:25:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:05.840 20:25:03 -- common/autotest_common.sh@10 -- # set +x 00:30:05.840 20:25:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.840 20:25:03 -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:05.840 20:25:03 -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:06.777 20:25:04 -- host/perf.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:06.777 20:25:04 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:06.777 20:25:04 -- host/perf.sh@30 -- # local_nvme_trid=0000:03:00.0 00:30:06.777 20:25:04 -- host/perf.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:07.036 20:25:04 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:07.036 20:25:04 -- host/perf.sh@33 -- # '[' -n 0000:03:00.0 ']' 00:30:07.036 20:25:04 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:07.036 20:25:04 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:07.036 20:25:04 -- host/perf.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:07.036 [2024-04-25 20:25:04.863341] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.036 20:25:04 -- host/perf.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:07.296 20:25:05 -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:07.296 20:25:05 -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:07.296 20:25:05 -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:07.296 20:25:05 -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:07.556 20:25:05 -- host/perf.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:07.556 [2024-04-25 20:25:05.472667] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.816 20:25:05 -- host/perf.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:07.816 20:25:05 -- host/perf.sh@52 -- # '[' -n 0000:03:00.0 ']' 00:30:07.816 20:25:05 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:03:00.0' 00:30:07.816 20:25:05 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:07.816 20:25:05 -- host/perf.sh@24 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:03:00.0' 00:30:09.199 Initializing NVMe Controllers 00:30:09.199 Attached to NVMe Controller at 0000:03:00.0 [1344:51c3] 00:30:09.199 Associating PCIE (0000:03:00.0) NSID 1 with lcore 0 00:30:09.199 Initialization complete. Launching workers. 00:30:09.199 ======================================================== 00:30:09.199 Latency(us) 00:30:09.199 Device Information : IOPS MiB/s Average min max 00:30:09.199 PCIE (0000:03:00.0) NSID 1 from core 0: 92254.63 360.37 346.42 82.62 4403.88 00:30:09.199 ======================================================== 00:30:09.199 Total : 92254.63 360.37 346.42 82.62 4403.88 00:30:09.199 00:30:09.199 20:25:07 -- host/perf.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:09.458 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.835 Initializing NVMe Controllers 00:30:10.835 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:10.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:10.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:10.835 Initialization complete. Launching workers. 00:30:10.835 ======================================================== 00:30:10.835 Latency(us) 00:30:10.835 Device Information : IOPS MiB/s Average min max 00:30:10.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 88.00 0.34 11533.33 139.43 45347.89 00:30:10.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 71.00 0.28 14181.76 7010.89 48022.59 00:30:10.835 ======================================================== 00:30:10.835 Total : 159.00 0.62 12715.96 139.43 48022.59 00:30:10.835 00:30:10.835 20:25:08 -- host/perf.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:10.835 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.214 Initializing NVMe Controllers 00:30:12.214 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:12.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:12.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:12.214 Initialization complete. Launching workers. 
00:30:12.214 ======================================================== 00:30:12.214 Latency(us) 00:30:12.214 Device Information : IOPS MiB/s Average min max 00:30:12.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11380.00 44.45 2814.93 296.50 9601.47 00:30:12.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3951.00 15.43 8157.11 4661.24 16008.37 00:30:12.214 ======================================================== 00:30:12.214 Total : 15331.00 59.89 4191.68 296.50 16008.37 00:30:12.214 00:30:12.214 20:25:09 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:30:12.214 20:25:09 -- host/perf.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:12.214 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.497 Initializing NVMe Controllers 00:30:15.498 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:15.498 Controller IO queue size 128, less than required. 00:30:15.498 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:15.498 Controller IO queue size 128, less than required. 00:30:15.498 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:15.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:15.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:15.498 Initialization complete. Launching workers. 00:30:15.498 ======================================================== 00:30:15.498 Latency(us) 00:30:15.498 Device Information : IOPS MiB/s Average min max 00:30:15.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1534.97 383.74 85653.14 48210.03 153752.45 00:30:15.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 619.29 154.82 215278.32 94732.38 305348.48 00:30:15.498 ======================================================== 00:30:15.498 Total : 2154.25 538.56 122916.62 48210.03 305348.48 00:30:15.498 00:30:15.498 20:25:12 -- host/perf.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:15.498 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.498 No valid NVMe controllers or AIO or URING devices found 00:30:15.498 Initializing NVMe Controllers 00:30:15.498 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:15.498 Controller IO queue size 128, less than required. 00:30:15.498 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:15.498 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:15.498 Controller IO queue size 128, less than required. 00:30:15.498 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:15.498 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:30:15.498 WARNING: Some requested NVMe devices were skipped 00:30:15.498 20:25:12 -- host/perf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:15.498 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.034 Initializing NVMe Controllers 00:30:18.034 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:18.034 Controller IO queue size 128, less than required. 00:30:18.034 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:18.034 Controller IO queue size 128, less than required. 00:30:18.034 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:18.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:18.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:18.034 Initialization complete. Launching workers. 00:30:18.034 00:30:18.034 ==================== 00:30:18.034 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:18.034 TCP transport: 00:30:18.034 polls: 30949 00:30:18.034 idle_polls: 9776 00:30:18.034 sock_completions: 21173 00:30:18.034 nvme_completions: 5805 00:30:18.034 submitted_requests: 8903 00:30:18.034 queued_requests: 1 00:30:18.034 00:30:18.034 ==================== 00:30:18.034 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:18.034 TCP transport: 00:30:18.034 polls: 31895 00:30:18.034 idle_polls: 9336 00:30:18.034 sock_completions: 22559 00:30:18.034 nvme_completions: 5727 00:30:18.034 submitted_requests: 8662 00:30:18.034 queued_requests: 1 00:30:18.034 ======================================================== 00:30:18.034 Latency(us) 00:30:18.034 Device Information : IOPS MiB/s Average min max 00:30:18.034 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1514.50 378.62 87718.29 46665.94 188398.58 00:30:18.034 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1495.00 373.75 86258.63 47759.17 142804.62 00:30:18.034 ======================================================== 00:30:18.034 Total : 3009.49 752.37 86993.19 46665.94 188398.58 00:30:18.034 00:30:18.034 20:25:15 -- host/perf.sh@66 -- # sync 00:30:18.034 20:25:15 -- host/perf.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:18.034 20:25:15 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:18.034 20:25:15 -- host/perf.sh@71 -- # '[' -n 0000:03:00.0 ']' 00:30:18.034 20:25:15 -- host/perf.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:18.605 20:25:16 -- host/perf.sh@72 -- # ls_guid=9373876d-611d-44fe-b75f-054bbdd268d0 00:30:18.606 20:25:16 -- host/perf.sh@73 -- # get_lvs_free_mb 9373876d-611d-44fe-b75f-054bbdd268d0 00:30:18.606 20:25:16 -- common/autotest_common.sh@1343 -- # local lvs_uuid=9373876d-611d-44fe-b75f-054bbdd268d0 00:30:18.606 20:25:16 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:18.606 20:25:16 -- common/autotest_common.sh@1345 -- # local fc 00:30:18.606 20:25:16 -- common/autotest_common.sh@1346 -- # local cs 00:30:18.606 20:25:16 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:18.866 20:25:16 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:18.866 { 00:30:18.866 "uuid": "9373876d-611d-44fe-b75f-054bbdd268d0", 00:30:18.866 "name": "lvs_0", 00:30:18.866 "base_bdev": "Nvme0n1", 00:30:18.866 "total_data_clusters": 228704, 00:30:18.866 "free_clusters": 228704, 00:30:18.866 "block_size": 512, 00:30:18.866 "cluster_size": 4194304 00:30:18.866 } 00:30:18.866 ]' 00:30:18.866 20:25:16 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="9373876d-611d-44fe-b75f-054bbdd268d0") .free_clusters' 00:30:18.866 20:25:16 -- common/autotest_common.sh@1348 -- # fc=228704 00:30:18.866 20:25:16 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="9373876d-611d-44fe-b75f-054bbdd268d0") .cluster_size' 00:30:18.866 20:25:16 -- common/autotest_common.sh@1349 -- # cs=4194304 00:30:18.866 20:25:16 -- common/autotest_common.sh@1352 -- # free_mb=914816 00:30:18.866 20:25:16 -- common/autotest_common.sh@1353 -- # echo 914816 00:30:18.866 914816 00:30:18.866 20:25:16 -- host/perf.sh@77 -- # '[' 914816 -gt 20480 ']' 00:30:18.866 20:25:16 -- host/perf.sh@78 -- # free_mb=20480 00:30:18.866 20:25:16 -- host/perf.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9373876d-611d-44fe-b75f-054bbdd268d0 lbd_0 20480 00:30:19.124 20:25:16 -- host/perf.sh@80 -- # lb_guid=92c3dc3e-aea1-48b9-82a6-5ebde5d0b31c 00:30:19.124 20:25:16 -- host/perf.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 92c3dc3e-aea1-48b9-82a6-5ebde5d0b31c lvs_n_0 00:30:19.690 20:25:17 -- host/perf.sh@83 -- # ls_nested_guid=66fb7769-521b-4d89-bf59-da9a53ed9869 00:30:19.690 20:25:17 -- host/perf.sh@84 -- # get_lvs_free_mb 66fb7769-521b-4d89-bf59-da9a53ed9869 00:30:19.690 20:25:17 -- common/autotest_common.sh@1343 -- # local lvs_uuid=66fb7769-521b-4d89-bf59-da9a53ed9869 00:30:19.690 20:25:17 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:19.690 20:25:17 -- common/autotest_common.sh@1345 -- # local fc 00:30:19.690 20:25:17 -- common/autotest_common.sh@1346 -- # local cs 00:30:19.690 20:25:17 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:19.690 20:25:17 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:19.690 { 00:30:19.690 "uuid": "9373876d-611d-44fe-b75f-054bbdd268d0", 00:30:19.690 "name": "lvs_0", 00:30:19.690 "base_bdev": "Nvme0n1", 00:30:19.690 "total_data_clusters": 228704, 00:30:19.690 "free_clusters": 223584, 00:30:19.690 "block_size": 512, 00:30:19.690 "cluster_size": 4194304 00:30:19.690 }, 00:30:19.690 { 00:30:19.690 "uuid": "66fb7769-521b-4d89-bf59-da9a53ed9869", 00:30:19.690 "name": "lvs_n_0", 00:30:19.690 "base_bdev": "92c3dc3e-aea1-48b9-82a6-5ebde5d0b31c", 00:30:19.690 "total_data_clusters": 5114, 00:30:19.690 "free_clusters": 5114, 00:30:19.690 "block_size": 512, 00:30:19.690 "cluster_size": 4194304 00:30:19.690 } 00:30:19.690 ]' 00:30:19.690 20:25:17 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="66fb7769-521b-4d89-bf59-da9a53ed9869") .free_clusters' 00:30:19.949 20:25:17 -- common/autotest_common.sh@1348 -- # fc=5114 00:30:19.949 20:25:17 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="66fb7769-521b-4d89-bf59-da9a53ed9869") .cluster_size' 00:30:19.949 20:25:17 -- common/autotest_common.sh@1349 -- # cs=4194304 00:30:19.949 20:25:17 -- common/autotest_common.sh@1352 -- # free_mb=20456 
00:30:19.949 20:25:17 -- common/autotest_common.sh@1353 -- # echo 20456 00:30:19.949 20456 00:30:19.949 20:25:17 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:19.949 20:25:17 -- host/perf.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 66fb7769-521b-4d89-bf59-da9a53ed9869 lbd_nest_0 20456 00:30:19.949 20:25:17 -- host/perf.sh@88 -- # lb_nested_guid=3073357b-c5ef-486a-b6b6-1628fe60fc2d 00:30:19.949 20:25:17 -- host/perf.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:20.209 20:25:17 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:20.209 20:25:17 -- host/perf.sh@91 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 3073357b-c5ef-486a-b6b6-1628fe60fc2d 00:30:20.209 20:25:18 -- host/perf.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:20.470 20:25:18 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:20.470 20:25:18 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:20.470 20:25:18 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:20.470 20:25:18 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:20.470 20:25:18 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:20.470 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.751 Initializing NVMe Controllers 00:30:32.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:32.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:32.751 Initialization complete. Launching workers. 00:30:32.751 ======================================================== 00:30:32.751 Latency(us) 00:30:32.751 Device Information : IOPS MiB/s Average min max 00:30:32.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.89 0.02 21326.24 190.73 47774.48 00:30:32.751 ======================================================== 00:30:32.751 Total : 46.89 0.02 21326.24 190.73 47774.48 00:30:32.751 00:30:32.751 20:25:28 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:32.751 20:25:28 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:32.751 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.733 Initializing NVMe Controllers 00:30:42.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:42.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:42.733 Initialization complete. Launching workers. 
00:30:42.733 ======================================================== 00:30:42.733 Latency(us) 00:30:42.733 Device Information : IOPS MiB/s Average min max 00:30:42.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 75.28 9.41 13283.35 4451.75 55995.85 00:30:42.733 ======================================================== 00:30:42.733 Total : 75.28 9.41 13283.35 4451.75 55995.85 00:30:42.733 00:30:42.733 20:25:39 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:42.733 20:25:39 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:42.733 20:25:39 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:42.733 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.720 Initializing NVMe Controllers 00:30:52.720 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:52.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:52.720 Initialization complete. Launching workers. 00:30:52.720 ======================================================== 00:30:52.720 Latency(us) 00:30:52.720 Device Information : IOPS MiB/s Average min max 00:30:52.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9064.66 4.43 3539.28 172.36 52002.57 00:30:52.720 ======================================================== 00:30:52.720 Total : 9064.66 4.43 3539.28 172.36 52002.57 00:30:52.720 00:30:52.720 20:25:49 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:52.720 20:25:49 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:52.720 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.716 Initializing NVMe Controllers 00:31:02.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:02.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:02.716 Initialization complete. Launching workers. 00:31:02.716 ======================================================== 00:31:02.716 Latency(us) 00:31:02.716 Device Information : IOPS MiB/s Average min max 00:31:02.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3396.10 424.51 9422.65 877.52 22258.40 00:31:02.716 ======================================================== 00:31:02.716 Total : 3396.10 424.51 9422.65 877.52 22258.40 00:31:02.716 00:31:02.716 20:26:00 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:02.716 20:26:00 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:02.716 20:26:00 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:02.716 EAL: No free 2048 kB hugepages reported on node 1 00:31:12.693 Initializing NVMe Controllers 00:31:12.693 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:12.693 Controller IO queue size 128, less than required. 00:31:12.693 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:12.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:12.693 Initialization complete. Launching workers. 
00:31:12.693 ======================================================== 00:31:12.693 Latency(us) 00:31:12.693 Device Information : IOPS MiB/s Average min max 00:31:12.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16119.01 7.87 7940.85 1369.51 19422.25 00:31:12.693 ======================================================== 00:31:12.693 Total : 16119.01 7.87 7940.85 1369.51 19422.25 00:31:12.693 00:31:12.693 20:26:10 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:12.693 20:26:10 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:12.693 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.902 Initializing NVMe Controllers 00:31:24.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:24.902 Controller IO queue size 128, less than required. 00:31:24.902 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:24.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:24.902 Initialization complete. Launching workers. 00:31:24.902 ======================================================== 00:31:24.902 Latency(us) 00:31:24.902 Device Information : IOPS MiB/s Average min max 00:31:24.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1212.07 151.51 105729.12 23416.54 214975.31 00:31:24.902 ======================================================== 00:31:24.902 Total : 1212.07 151.51 105729.12 23416.54 214975.31 00:31:24.902 00:31:24.902 20:26:20 -- host/perf.sh@104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:24.902 20:26:21 -- host/perf.sh@105 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3073357b-c5ef-486a-b6b6-1628fe60fc2d 00:31:24.902 20:26:21 -- host/perf.sh@106 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:24.902 20:26:21 -- host/perf.sh@107 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 92c3dc3e-aea1-48b9-82a6-5ebde5d0b31c 00:31:24.902 20:26:21 -- host/perf.sh@108 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:24.902 20:26:22 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:24.902 20:26:22 -- host/perf.sh@114 -- # nvmftestfini 00:31:24.902 20:26:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:24.902 20:26:22 -- nvmf/common.sh@116 -- # sync 00:31:24.902 20:26:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:24.902 20:26:22 -- nvmf/common.sh@119 -- # set +e 00:31:24.902 20:26:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:24.902 20:26:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:24.902 rmmod nvme_tcp 00:31:24.902 rmmod nvme_fabrics 00:31:24.902 rmmod nvme_keyring 00:31:24.902 20:26:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:24.902 20:26:22 -- nvmf/common.sh@123 -- # set -e 00:31:24.902 20:26:22 -- nvmf/common.sh@124 -- # return 0 00:31:24.902 20:26:22 -- nvmf/common.sh@477 -- # '[' -n 1706815 ']' 00:31:24.902 20:26:22 -- nvmf/common.sh@478 -- # killprocess 1706815 00:31:24.902 20:26:22 -- common/autotest_common.sh@926 -- # '[' -z 1706815 ']' 00:31:24.902 20:26:22 -- common/autotest_common.sh@930 -- # kill -0 1706815 00:31:24.902 
20:26:22 -- common/autotest_common.sh@931 -- # uname 00:31:24.902 20:26:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:24.902 20:26:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1706815 00:31:24.902 20:26:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:24.902 20:26:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:24.902 20:26:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1706815' 00:31:24.902 killing process with pid 1706815 00:31:24.902 20:26:22 -- common/autotest_common.sh@945 -- # kill 1706815 00:31:24.902 20:26:22 -- common/autotest_common.sh@950 -- # wait 1706815 00:31:25.836 20:26:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:25.836 20:26:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:25.836 20:26:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:25.836 20:26:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:25.836 20:26:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:25.836 20:26:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.836 20:26:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:25.836 20:26:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.369 20:26:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:28.369 00:31:28.369 real 1m29.550s 00:31:28.369 user 5m19.853s 00:31:28.369 sys 0m12.407s 00:31:28.369 20:26:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:28.369 20:26:25 -- common/autotest_common.sh@10 -- # set +x 00:31:28.369 ************************************ 00:31:28.369 END TEST nvmf_perf 00:31:28.369 ************************************ 00:31:28.369 20:26:25 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:28.369 20:26:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:28.369 20:26:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:28.369 20:26:25 -- common/autotest_common.sh@10 -- # set +x 00:31:28.369 ************************************ 00:31:28.369 START TEST nvmf_fio_host 00:31:28.369 ************************************ 00:31:28.369 20:26:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:28.369 * Looking for test storage... 
00:31:28.369 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:31:28.369 20:26:25 -- host/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:31:28.369 20:26:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:28.369 20:26:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:28.369 20:26:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:28.369 20:26:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.369 20:26:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.370 20:26:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.370 20:26:25 -- paths/export.sh@5 -- # export PATH 00:31:28.370 20:26:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.370 20:26:25 -- host/fio.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:31:28.370 20:26:25 -- nvmf/common.sh@7 -- # uname -s 00:31:28.370 20:26:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:28.370 20:26:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:28.370 20:26:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:28.370 20:26:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:28.370 20:26:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:28.370 20:26:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:28.370 20:26:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:28.370 20:26:25 -- nvmf/common.sh@15 
-- # NVMF_TRANSPORT_OPTS= 00:31:28.370 20:26:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:28.370 20:26:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:28.370 20:26:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:31:28.370 20:26:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:31:28.370 20:26:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:28.370 20:26:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:28.370 20:26:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:31:28.370 20:26:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:31:28.370 20:26:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:28.370 20:26:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:28.370 20:26:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:28.370 20:26:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.370 20:26:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.370 20:26:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.370 20:26:25 -- paths/export.sh@5 -- # export PATH 00:31:28.370 20:26:25 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.370 20:26:25 -- nvmf/common.sh@46 -- # : 0 00:31:28.370 20:26:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:28.370 20:26:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:28.370 20:26:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:28.370 20:26:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:28.370 20:26:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:28.370 20:26:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:28.370 20:26:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:28.370 20:26:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:28.370 20:26:25 -- host/fio.sh@12 -- # nvmftestinit 00:31:28.370 20:26:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:28.370 20:26:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:28.370 20:26:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:28.370 20:26:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:28.370 20:26:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:28.370 20:26:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.370 20:26:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:28.370 20:26:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.370 20:26:25 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:31:28.370 20:26:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:28.370 20:26:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:28.370 20:26:25 -- common/autotest_common.sh@10 -- # set +x 00:31:32.636 20:26:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:32.637 20:26:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:32.637 20:26:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:32.637 20:26:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:32.637 20:26:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:32.637 20:26:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:32.637 20:26:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:32.637 20:26:30 -- nvmf/common.sh@294 -- # net_devs=() 00:31:32.637 20:26:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:32.637 20:26:30 -- nvmf/common.sh@295 -- # e810=() 00:31:32.637 20:26:30 -- nvmf/common.sh@295 -- # local -ga e810 00:31:32.637 20:26:30 -- nvmf/common.sh@296 -- # x722=() 00:31:32.637 20:26:30 -- nvmf/common.sh@296 -- # local -ga x722 00:31:32.637 20:26:30 -- nvmf/common.sh@297 -- # mlx=() 00:31:32.637 20:26:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:32.637 20:26:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:32.637 20:26:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:32.637 20:26:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:32.637 20:26:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:32.637 20:26:30 -- nvmf/common.sh@307 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:32.637 20:26:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:32.637 20:26:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:32.637 20:26:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:32.637 20:26:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:32.637 20:26:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:32.637 20:26:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:32.637 20:26:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:32.637 20:26:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:32.637 20:26:30 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:31:32.637 20:26:30 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:31:32.637 20:26:30 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:31:32.637 20:26:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:32.637 20:26:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:32.637 20:26:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:31:32.637 Found 0000:27:00.0 (0x8086 - 0x159b) 00:31:32.637 20:26:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:32.637 20:26:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:32.637 20:26:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.637 20:26:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.637 20:26:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:32.637 20:26:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:32.637 20:26:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:31:32.637 Found 0000:27:00.1 (0x8086 - 0x159b) 00:31:32.637 20:26:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:32.637 20:26:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:32.637 20:26:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.637 20:26:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.637 20:26:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:32.637 20:26:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:32.637 20:26:30 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:31:32.637 20:26:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:32.637 20:26:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.637 20:26:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:32.637 20:26:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.637 20:26:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:31:32.637 Found net devices under 0000:27:00.0: cvl_0_0 00:31:32.637 20:26:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.637 20:26:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:32.637 20:26:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.637 20:26:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:32.637 20:26:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.637 20:26:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:31:32.637 Found net devices under 0000:27:00.1: cvl_0_1 00:31:32.637 20:26:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.637 20:26:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:32.637 20:26:30 -- nvmf/common.sh@402 -- # 
is_hw=yes 00:31:32.637 20:26:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:32.637 20:26:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:32.637 20:26:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:32.637 20:26:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.637 20:26:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.637 20:26:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:32.637 20:26:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:32.637 20:26:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:32.637 20:26:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:32.637 20:26:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:32.637 20:26:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:32.637 20:26:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:32.637 20:26:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:32.637 20:26:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:32.637 20:26:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:32.897 20:26:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.897 20:26:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.897 20:26:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.897 20:26:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:32.897 20:26:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.897 20:26:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.897 20:26:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.897 20:26:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:32.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:32.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:31:32.897 00:31:32.897 --- 10.0.0.2 ping statistics --- 00:31:32.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.897 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:31:32.897 20:26:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:32.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:31:32.897 00:31:32.897 --- 10.0.0.1 ping statistics --- 00:31:32.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.897 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:31:32.897 20:26:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.897 20:26:30 -- nvmf/common.sh@410 -- # return 0 00:31:32.897 20:26:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:32.897 20:26:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:32.897 20:26:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:32.897 20:26:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:32.897 20:26:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:32.897 20:26:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:32.897 20:26:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:33.157 20:26:30 -- host/fio.sh@14 -- # [[ y != y ]] 00:31:33.157 20:26:30 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:31:33.157 20:26:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:33.157 20:26:30 -- common/autotest_common.sh@10 -- # set +x 00:31:33.157 20:26:30 -- host/fio.sh@22 -- # nvmfpid=1726156 00:31:33.157 20:26:30 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:33.157 20:26:30 -- host/fio.sh@26 -- # waitforlisten 1726156 00:31:33.157 20:26:30 -- common/autotest_common.sh@819 -- # '[' -z 1726156 ']' 00:31:33.157 20:26:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.157 20:26:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:33.157 20:26:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:33.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:33.157 20:26:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:33.157 20:26:30 -- common/autotest_common.sh@10 -- # set +x 00:31:33.157 20:26:30 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:33.157 [2024-04-25 20:26:30.918151] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:31:33.157 [2024-04-25 20:26:30.918283] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:33.157 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.157 [2024-04-25 20:26:31.050169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:33.418 [2024-04-25 20:26:31.145408] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:33.418 [2024-04-25 20:26:31.145625] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:33.418 [2024-04-25 20:26:31.145639] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:33.418 [2024-04-25 20:26:31.145650] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:33.418 [2024-04-25 20:26:31.145726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:33.418 [2024-04-25 20:26:31.145830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:33.418 [2024-04-25 20:26:31.145845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.418 [2024-04-25 20:26:31.145849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:33.989 20:26:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:33.989 20:26:31 -- common/autotest_common.sh@852 -- # return 0 00:31:33.989 20:26:31 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:33.989 20:26:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.989 20:26:31 -- common/autotest_common.sh@10 -- # set +x 00:31:33.989 [2024-04-25 20:26:31.630551] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:33.989 20:26:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.989 20:26:31 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:31:33.989 20:26:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:33.989 20:26:31 -- common/autotest_common.sh@10 -- # set +x 00:31:33.989 20:26:31 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:33.989 20:26:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.989 20:26:31 -- common/autotest_common.sh@10 -- # set +x 00:31:33.989 Malloc1 00:31:33.989 20:26:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.989 20:26:31 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:33.989 20:26:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.989 20:26:31 -- common/autotest_common.sh@10 -- # set +x 00:31:33.989 20:26:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.989 20:26:31 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:33.989 20:26:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.989 20:26:31 -- common/autotest_common.sh@10 -- # set +x 00:31:33.989 20:26:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.989 20:26:31 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:33.989 20:26:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.989 20:26:31 -- common/autotest_common.sh@10 -- # set +x 00:31:33.989 [2024-04-25 20:26:31.739472] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:33.989 20:26:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.989 20:26:31 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:33.989 20:26:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.989 20:26:31 -- common/autotest_common.sh@10 -- # set +x 00:31:33.989 20:26:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.989 20:26:31 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme 00:31:33.989 20:26:31 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:33.989 20:26:31 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:33.989 20:26:31 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:33.989 20:26:31 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:33.989 20:26:31 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:33.989 20:26:31 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:33.989 20:26:31 -- common/autotest_common.sh@1320 -- # shift 00:31:33.989 20:26:31 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:33.989 20:26:31 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:33.989 20:26:31 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:33.989 20:26:31 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:33.989 20:26:31 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:33.989 20:26:31 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:33.989 20:26:31 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:33.989 20:26:31 -- common/autotest_common.sh@1326 -- # break 00:31:33.989 20:26:31 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:33.989 20:26:31 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:34.249 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:34.249 fio-3.35 00:31:34.249 Starting 1 thread 00:31:34.507 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.036 00:31:37.036 test: (groupid=0, jobs=1): err= 0: pid=1726651: Thu Apr 25 20:26:34 2024 00:31:37.036 read: IOPS=13.0k, BW=50.6MiB/s (53.1MB/s)(101MiB/2004msec) 00:31:37.036 slat (nsec): min=1572, max=134202, avg=1967.71, stdev=1249.42 00:31:37.036 clat (usec): min=2474, max=9379, avg=5447.34, stdev=395.63 00:31:37.036 lat (usec): min=2498, max=9381, avg=5449.31, stdev=395.55 00:31:37.036 clat percentiles (usec): 00:31:37.036 | 1.00th=[ 4621], 5.00th=[ 4883], 10.00th=[ 5014], 20.00th=[ 5145], 00:31:37.036 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5538], 00:31:37.036 | 70.00th=[ 5604], 80.00th=[ 5735], 90.00th=[ 5932], 95.00th=[ 6063], 00:31:37.036 | 99.00th=[ 6521], 99.50th=[ 6915], 99.90th=[ 7701], 99.95th=[ 8586], 00:31:37.036 | 99.99th=[ 9372] 00:31:37.036 bw ( KiB/s): min=50488, max=52520, per=99.93%, avg=51786.00, stdev=892.19, samples=4 00:31:37.036 iops : min=12622, max=13130, avg=12946.50, stdev=223.05, samples=4 00:31:37.036 write: IOPS=12.9k, BW=50.5MiB/s (53.0MB/s)(101MiB/2004msec); 0 zone resets 00:31:37.036 slat (nsec): min=1614, max=123418, avg=2057.40, stdev=1004.37 00:31:37.036 clat (usec): min=1418, max=8703, avg=4379.10, stdev=339.07 00:31:37.036 lat (usec): min=1430, max=8704, avg=4381.15, stdev=339.05 00:31:37.036 clat percentiles (usec): 00:31:37.036 | 1.00th=[ 3621], 5.00th=[ 3884], 10.00th=[ 3982], 20.00th=[ 4146], 00:31:37.036 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:31:37.036 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4883], 00:31:37.036 | 99.00th=[ 5211], 99.50th=[ 5538], 99.90th=[ 6718], 99.95th=[ 7308], 00:31:37.036 | 
99.99th=[ 8586] 00:31:37.036 bw ( KiB/s): min=51032, max=52096, per=99.98%, avg=51734.00, stdev=501.79, samples=4 00:31:37.036 iops : min=12758, max=13024, avg=12933.50, stdev=125.45, samples=4 00:31:37.036 lat (msec) : 2=0.02%, 4=5.18%, 10=94.80% 00:31:37.036 cpu : usr=85.27%, sys=14.43%, ctx=4, majf=0, minf=1525 00:31:37.036 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:37.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:37.036 issued rwts: total=25962,25925,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.036 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:37.036 00:31:37.036 Run status group 0 (all jobs): 00:31:37.036 READ: bw=50.6MiB/s (53.1MB/s), 50.6MiB/s-50.6MiB/s (53.1MB/s-53.1MB/s), io=101MiB (106MB), run=2004-2004msec 00:31:37.036 WRITE: bw=50.5MiB/s (53.0MB/s), 50.5MiB/s-50.5MiB/s (53.0MB/s-53.0MB/s), io=101MiB (106MB), run=2004-2004msec 00:31:37.036 ----------------------------------------------------- 00:31:37.036 Suppressions used: 00:31:37.036 count bytes template 00:31:37.036 1 57 /usr/src/fio/parse.c 00:31:37.036 1 8 libtcmalloc_minimal.so 00:31:37.036 ----------------------------------------------------- 00:31:37.036 00:31:37.036 20:26:34 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:37.036 20:26:34 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:37.036 20:26:34 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:37.036 20:26:34 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:37.036 20:26:34 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:37.036 20:26:34 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:37.036 20:26:34 -- common/autotest_common.sh@1320 -- # shift 00:31:37.036 20:26:34 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:37.036 20:26:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:37.036 20:26:34 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:37.036 20:26:34 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:37.036 20:26:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:37.036 20:26:34 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:37.036 20:26:34 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:37.036 20:26:34 -- common/autotest_common.sh@1326 -- # break 00:31:37.036 20:26:34 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:37.036 20:26:34 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:37.602 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:37.602 fio-3.35 00:31:37.602 Starting 
1 thread 00:31:37.602 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.134 00:31:40.134 test: (groupid=0, jobs=1): err= 0: pid=1727484: Thu Apr 25 20:26:37 2024 00:31:40.134 read: IOPS=11.3k, BW=176MiB/s (185MB/s)(353MiB/2006msec) 00:31:40.134 slat (nsec): min=2580, max=80494, avg=2878.70, stdev=1085.95 00:31:40.134 clat (usec): min=1351, max=13482, avg=6662.09, stdev=1616.64 00:31:40.134 lat (usec): min=1354, max=13485, avg=6664.97, stdev=1616.80 00:31:40.134 clat percentiles (usec): 00:31:40.134 | 1.00th=[ 3556], 5.00th=[ 4178], 10.00th=[ 4621], 20.00th=[ 5145], 00:31:40.134 | 30.00th=[ 5669], 40.00th=[ 6128], 50.00th=[ 6587], 60.00th=[ 7177], 00:31:40.134 | 70.00th=[ 7635], 80.00th=[ 8029], 90.00th=[ 8586], 95.00th=[ 9241], 00:31:40.134 | 99.00th=[10945], 99.50th=[11207], 99.90th=[12911], 99.95th=[13173], 00:31:40.134 | 99.99th=[13435] 00:31:40.134 bw ( KiB/s): min=85120, max=98176, per=51.00%, avg=91920.00, stdev=5484.65, samples=4 00:31:40.134 iops : min= 5320, max= 6136, avg=5745.00, stdev=342.79, samples=4 00:31:40.134 write: IOPS=6757, BW=106MiB/s (111MB/s)(188MiB/1778msec); 0 zone resets 00:31:40.134 slat (usec): min=28, max=130, avg=30.44, stdev= 3.47 00:31:40.134 clat (usec): min=3337, max=16248, avg=8082.69, stdev=1399.19 00:31:40.134 lat (usec): min=3365, max=16277, avg=8113.13, stdev=1399.79 00:31:40.134 clat percentiles (usec): 00:31:40.134 | 1.00th=[ 5407], 5.00th=[ 6194], 10.00th=[ 6521], 20.00th=[ 6915], 00:31:40.134 | 30.00th=[ 7242], 40.00th=[ 7570], 50.00th=[ 7832], 60.00th=[ 8225], 00:31:40.134 | 70.00th=[ 8717], 80.00th=[ 9241], 90.00th=[10028], 95.00th=[10552], 00:31:40.134 | 99.00th=[11731], 99.50th=[12780], 99.90th=[13566], 99.95th=[13566], 00:31:40.134 | 99.99th=[16188] 00:31:40.134 bw ( KiB/s): min=89664, max=102528, per=88.43%, avg=95616.00, stdev=5698.94, samples=4 00:31:40.134 iops : min= 5604, max= 6408, avg=5976.00, stdev=356.18, samples=4 00:31:40.134 lat (msec) : 2=0.02%, 4=2.12%, 10=92.55%, 20=5.30% 00:31:40.134 cpu : usr=86.93%, sys=12.57%, ctx=11, majf=0, minf=2488 00:31:40.134 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:31:40.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:40.134 issued rwts: total=22599,12015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.134 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:40.134 00:31:40.134 Run status group 0 (all jobs): 00:31:40.134 READ: bw=176MiB/s (185MB/s), 176MiB/s-176MiB/s (185MB/s-185MB/s), io=353MiB (370MB), run=2006-2006msec 00:31:40.134 WRITE: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=188MiB (197MB), run=1778-1778msec 00:31:40.134 ----------------------------------------------------- 00:31:40.134 Suppressions used: 00:31:40.134 count bytes template 00:31:40.134 1 57 /usr/src/fio/parse.c 00:31:40.134 942 90432 /usr/src/fio/iolog.c 00:31:40.134 1 8 libtcmalloc_minimal.so 00:31:40.134 ----------------------------------------------------- 00:31:40.134 00:31:40.134 20:26:37 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:40.134 20:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.134 20:26:37 -- common/autotest_common.sh@10 -- # set +x 00:31:40.134 20:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.134 20:26:37 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:31:40.134 20:26:37 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:31:40.134 20:26:37 -- 
host/fio.sh@49 -- # get_nvme_bdfs 00:31:40.134 20:26:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:40.134 20:26:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:40.134 20:26:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:40.134 20:26:37 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:40.134 20:26:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:40.393 20:26:38 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:31:40.393 20:26:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:31:40.393 20:26:38 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:03:00.0 -i 10.0.0.2 00:31:40.393 20:26:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.393 20:26:38 -- common/autotest_common.sh@10 -- # set +x 00:31:40.652 Nvme0n1 00:31:40.652 20:26:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.652 20:26:38 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:40.652 20:26:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.652 20:26:38 -- common/autotest_common.sh@10 -- # set +x 00:31:41.220 20:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.220 20:26:39 -- host/fio.sh@51 -- # ls_guid=28f6b9c2-6095-4fc5-a26b-b16aeb9d19f7 00:31:41.220 20:26:39 -- host/fio.sh@52 -- # get_lvs_free_mb 28f6b9c2-6095-4fc5-a26b-b16aeb9d19f7 00:31:41.220 20:26:39 -- common/autotest_common.sh@1343 -- # local lvs_uuid=28f6b9c2-6095-4fc5-a26b-b16aeb9d19f7 00:31:41.220 20:26:39 -- common/autotest_common.sh@1344 -- # local lvs_info 00:31:41.220 20:26:39 -- common/autotest_common.sh@1345 -- # local fc 00:31:41.220 20:26:39 -- common/autotest_common.sh@1346 -- # local cs 00:31:41.220 20:26:39 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:31:41.220 20:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.220 20:26:39 -- common/autotest_common.sh@10 -- # set +x 00:31:41.220 20:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.220 20:26:39 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:31:41.220 { 00:31:41.220 "uuid": "28f6b9c2-6095-4fc5-a26b-b16aeb9d19f7", 00:31:41.220 "name": "lvs_0", 00:31:41.220 "base_bdev": "Nvme0n1", 00:31:41.220 "total_data_clusters": 893, 00:31:41.220 "free_clusters": 893, 00:31:41.220 "block_size": 512, 00:31:41.220 "cluster_size": 1073741824 00:31:41.220 } 00:31:41.220 ]' 00:31:41.220 20:26:39 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="28f6b9c2-6095-4fc5-a26b-b16aeb9d19f7") .free_clusters' 00:31:41.220 20:26:39 -- common/autotest_common.sh@1348 -- # fc=893 00:31:41.220 20:26:39 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="28f6b9c2-6095-4fc5-a26b-b16aeb9d19f7") .cluster_size' 00:31:41.220 20:26:39 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:31:41.220 20:26:39 -- common/autotest_common.sh@1352 -- # free_mb=914432 00:31:41.220 20:26:39 -- common/autotest_common.sh@1353 -- # echo 914432 00:31:41.220 914432 00:31:41.220 20:26:39 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 914432 00:31:41.220 20:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.220 20:26:39 -- common/autotest_common.sh@10 -- # set +x 00:31:41.220 b9488902-659b-43db-9cde-d384312efddb 00:31:41.220 20:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
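[editor's note] The 914432 figure printed above comes from get_lvs_free_mb, which just converts the lvstore's free cluster count into MiB from the bdev_lvol_get_lvstores output selected with jq. A worked sketch of that arithmetic for the values reported in the trace (893 free clusters of 1 GiB each); the variable names mirror the helper but this is only an illustration:

  fc=893                                  # .free_clusters for lvs_0
  cs=1073741824                           # .cluster_size in bytes (1 GiB)
  free_mb=$(( fc * cs / 1024 / 1024 ))    # 893 * 1024 = 914432
  echo "$free_mb"                         # -> 914432, the size handed to bdev_lvol_create lbd_0 above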
00:31:41.220 20:26:39 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:41.220 20:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.220 20:26:39 -- common/autotest_common.sh@10 -- # set +x 00:31:41.491 20:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.491 20:26:39 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:41.491 20:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.491 20:26:39 -- common/autotest_common.sh@10 -- # set +x 00:31:41.491 20:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.491 20:26:39 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:41.491 20:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.491 20:26:39 -- common/autotest_common.sh@10 -- # set +x 00:31:41.491 20:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.491 20:26:39 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:41.491 20:26:39 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:41.491 20:26:39 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:41.491 20:26:39 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:41.491 20:26:39 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:41.491 20:26:39 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:41.491 20:26:39 -- common/autotest_common.sh@1320 -- # shift 00:31:41.491 20:26:39 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:41.491 20:26:39 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:41.491 20:26:39 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:41.491 20:26:39 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:41.491 20:26:39 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:41.491 20:26:39 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:41.491 20:26:39 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:41.491 20:26:39 -- common/autotest_common.sh@1326 -- # break 00:31:41.491 20:26:39 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:41.491 20:26:39 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:41.750 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:41.750 fio-3.35 00:31:41.750 Starting 1 thread 00:31:41.750 EAL: No free 2048 kB hugepages reported on node 1 00:31:44.283 00:31:44.283 test: (groupid=0, jobs=1): err= 0: pid=1728321: Thu Apr 25 20:26:41 2024 00:31:44.283 read: IOPS=9642, BW=37.7MiB/s (39.5MB/s)(75.5MiB/2005msec) 00:31:44.283 slat (nsec): min=1594, 
max=144882, avg=2522.26, stdev=1539.63 00:31:44.283 clat (usec): min=3579, max=12246, avg=7304.08, stdev=632.89 00:31:44.283 lat (usec): min=3612, max=12248, avg=7306.60, stdev=632.78 00:31:44.283 clat percentiles (usec): 00:31:44.283 | 1.00th=[ 5997], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6783], 00:31:44.283 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7439], 00:31:44.283 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8356], 00:31:44.283 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[10552], 99.95th=[11338], 00:31:44.283 | 99.99th=[12256] 00:31:44.283 bw ( KiB/s): min=36760, max=39904, per=99.91%, avg=38538.00, stdev=1325.26, samples=4 00:31:44.283 iops : min= 9190, max= 9976, avg=9634.50, stdev=331.31, samples=4 00:31:44.283 write: IOPS=9652, BW=37.7MiB/s (39.5MB/s)(75.6MiB/2005msec); 0 zone resets 00:31:44.283 slat (nsec): min=1659, max=130533, avg=2636.02, stdev=1185.00 00:31:44.283 clat (usec): min=1999, max=9662, avg=5866.68, stdev=542.26 00:31:44.283 lat (usec): min=2014, max=9664, avg=5869.31, stdev=542.22 00:31:44.283 clat percentiles (usec): 00:31:44.283 | 1.00th=[ 4686], 5.00th=[ 5080], 10.00th=[ 5276], 20.00th=[ 5473], 00:31:44.283 | 30.00th=[ 5604], 40.00th=[ 5735], 50.00th=[ 5866], 60.00th=[ 5932], 00:31:44.283 | 70.00th=[ 6063], 80.00th=[ 6259], 90.00th=[ 6521], 95.00th=[ 6718], 00:31:44.283 | 99.00th=[ 7504], 99.50th=[ 7767], 99.90th=[ 8717], 99.95th=[ 9372], 00:31:44.283 | 99.99th=[ 9634] 00:31:44.283 bw ( KiB/s): min=37656, max=39424, per=99.93%, avg=38582.00, stdev=760.67, samples=4 00:31:44.283 iops : min= 9414, max= 9856, avg=9645.50, stdev=190.17, samples=4 00:31:44.283 lat (msec) : 2=0.01%, 4=0.12%, 10=99.76%, 20=0.11% 00:31:44.283 cpu : usr=86.03%, sys=13.57%, ctx=5, majf=0, minf=1522 00:31:44.283 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:44.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:44.283 issued rwts: total=19334,19353,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:44.283 00:31:44.283 Run status group 0 (all jobs): 00:31:44.283 READ: bw=37.7MiB/s (39.5MB/s), 37.7MiB/s-37.7MiB/s (39.5MB/s-39.5MB/s), io=75.5MiB (79.2MB), run=2005-2005msec 00:31:44.283 WRITE: bw=37.7MiB/s (39.5MB/s), 37.7MiB/s-37.7MiB/s (39.5MB/s-39.5MB/s), io=75.6MiB (79.3MB), run=2005-2005msec 00:31:44.283 ----------------------------------------------------- 00:31:44.283 Suppressions used: 00:31:44.283 count bytes template 00:31:44.283 1 58 /usr/src/fio/parse.c 00:31:44.283 1 8 libtcmalloc_minimal.so 00:31:44.283 ----------------------------------------------------- 00:31:44.283 00:31:44.283 20:26:42 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:44.283 20:26:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:44.283 20:26:42 -- common/autotest_common.sh@10 -- # set +x 00:31:44.283 20:26:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:44.283 20:26:42 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:44.283 20:26:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:44.283 20:26:42 -- common/autotest_common.sh@10 -- # set +x 00:31:44.283 20:26:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:44.283 20:26:42 -- host/fio.sh@62 -- # ls_nested_guid=3ebb43e6-ae69-4b8b-945c-edceb5562c48 00:31:44.283 20:26:42 -- 
host/fio.sh@63 -- # get_lvs_free_mb 3ebb43e6-ae69-4b8b-945c-edceb5562c48 00:31:44.283 20:26:42 -- common/autotest_common.sh@1343 -- # local lvs_uuid=3ebb43e6-ae69-4b8b-945c-edceb5562c48 00:31:44.283 20:26:42 -- common/autotest_common.sh@1344 -- # local lvs_info 00:31:44.283 20:26:42 -- common/autotest_common.sh@1345 -- # local fc 00:31:44.283 20:26:42 -- common/autotest_common.sh@1346 -- # local cs 00:31:44.283 20:26:42 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:31:44.283 20:26:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:44.283 20:26:42 -- common/autotest_common.sh@10 -- # set +x 00:31:44.283 20:26:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:44.283 20:26:42 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:31:44.283 { 00:31:44.283 "uuid": "28f6b9c2-6095-4fc5-a26b-b16aeb9d19f7", 00:31:44.283 "name": "lvs_0", 00:31:44.283 "base_bdev": "Nvme0n1", 00:31:44.283 "total_data_clusters": 893, 00:31:44.283 "free_clusters": 0, 00:31:44.283 "block_size": 512, 00:31:44.283 "cluster_size": 1073741824 00:31:44.283 }, 00:31:44.283 { 00:31:44.283 "uuid": "3ebb43e6-ae69-4b8b-945c-edceb5562c48", 00:31:44.283 "name": "lvs_n_0", 00:31:44.283 "base_bdev": "b9488902-659b-43db-9cde-d384312efddb", 00:31:44.283 "total_data_clusters": 228384, 00:31:44.283 "free_clusters": 228384, 00:31:44.283 "block_size": 512, 00:31:44.283 "cluster_size": 4194304 00:31:44.283 } 00:31:44.283 ]' 00:31:44.283 20:26:42 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="3ebb43e6-ae69-4b8b-945c-edceb5562c48") .free_clusters' 00:31:44.283 20:26:42 -- common/autotest_common.sh@1348 -- # fc=228384 00:31:44.283 20:26:42 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="3ebb43e6-ae69-4b8b-945c-edceb5562c48") .cluster_size' 00:31:44.283 20:26:42 -- common/autotest_common.sh@1349 -- # cs=4194304 00:31:44.283 20:26:42 -- common/autotest_common.sh@1352 -- # free_mb=913536 00:31:44.283 20:26:42 -- common/autotest_common.sh@1353 -- # echo 913536 00:31:44.283 913536 00:31:44.283 20:26:42 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 913536 00:31:44.283 20:26:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:44.283 20:26:42 -- common/autotest_common.sh@10 -- # set +x 00:31:45.219 71e9e8de-33aa-43c7-86bf-44b81e944095 00:31:45.219 20:26:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.219 20:26:42 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:45.219 20:26:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.219 20:26:42 -- common/autotest_common.sh@10 -- # set +x 00:31:45.219 20:26:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.219 20:26:42 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:45.219 20:26:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.219 20:26:42 -- common/autotest_common.sh@10 -- # set +x 00:31:45.219 20:26:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.219 20:26:42 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:45.219 20:26:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.219 20:26:42 -- common/autotest_common.sh@10 -- # set +x 00:31:45.219 20:26:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.219 20:26:42 -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:45.219 20:26:42 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:45.219 20:26:42 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:45.219 20:26:42 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:45.220 20:26:42 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:45.220 20:26:42 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:45.220 20:26:42 -- common/autotest_common.sh@1320 -- # shift 00:31:45.220 20:26:42 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:45.220 20:26:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:45.220 20:26:42 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:45.220 20:26:42 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:45.220 20:26:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:45.220 20:26:42 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:45.220 20:26:42 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:45.220 20:26:42 -- common/autotest_common.sh@1326 -- # break 00:31:45.220 20:26:42 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:45.220 20:26:42 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:45.479 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:45.479 fio-3.35 00:31:45.479 Starting 1 thread 00:31:45.479 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.005 00:31:48.005 test: (groupid=0, jobs=1): err= 0: pid=1729291: Thu Apr 25 20:26:45 2024 00:31:48.005 read: IOPS=8656, BW=33.8MiB/s (35.5MB/s)(67.8MiB/2006msec) 00:31:48.005 slat (nsec): min=1607, max=97511, avg=1870.09, stdev=1047.78 00:31:48.005 clat (usec): min=3868, max=13797, avg=8198.71, stdev=674.44 00:31:48.005 lat (usec): min=3884, max=13799, avg=8200.58, stdev=674.39 00:31:48.005 clat percentiles (usec): 00:31:48.005 | 1.00th=[ 6718], 5.00th=[ 7177], 10.00th=[ 7373], 20.00th=[ 7635], 00:31:48.005 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[ 8356], 00:31:48.005 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 8979], 95.00th=[ 9241], 00:31:48.005 | 99.00th=[ 9896], 99.50th=[10421], 99.90th=[11469], 99.95th=[12125], 00:31:48.005 | 99.99th=[13042] 00:31:48.005 bw ( KiB/s): min=33652, max=35160, per=99.83%, avg=34565.00, stdev=651.41, samples=4 00:31:48.005 iops : min= 8413, max= 8790, avg=8641.25, stdev=162.85, samples=4 00:31:48.005 write: IOPS=8646, BW=33.8MiB/s (35.4MB/s)(67.8MiB/2006msec); 0 zone resets 00:31:48.005 slat (nsec): min=1664, max=79604, avg=1970.65, stdev=673.54 00:31:48.005 clat (usec): min=1680, max=12962, avg=6525.00, stdev=598.06 00:31:48.005 lat (usec): min=1690, max=12964, avg=6526.97, stdev=598.03 00:31:48.005 clat percentiles (usec): 00:31:48.005 | 1.00th=[ 5211], 5.00th=[ 5604], 
10.00th=[ 5866], 20.00th=[ 6063], 00:31:48.005 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 00:31:48.005 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7242], 95.00th=[ 7439], 00:31:48.005 | 99.00th=[ 7963], 99.50th=[ 8586], 99.90th=[10028], 99.95th=[11076], 00:31:48.005 | 99.99th=[12125] 00:31:48.005 bw ( KiB/s): min=34304, max=34944, per=99.94%, avg=34564.50, stdev=292.40, samples=4 00:31:48.005 iops : min= 8576, max= 8736, avg=8641.00, stdev=73.06, samples=4 00:31:48.005 lat (msec) : 2=0.01%, 4=0.10%, 10=99.40%, 20=0.48% 00:31:48.005 cpu : usr=86.68%, sys=12.97%, ctx=4, majf=0, minf=1522 00:31:48.005 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:48.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.005 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:48.005 issued rwts: total=17364,17345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.005 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.005 00:31:48.005 Run status group 0 (all jobs): 00:31:48.005 READ: bw=33.8MiB/s (35.5MB/s), 33.8MiB/s-33.8MiB/s (35.5MB/s-35.5MB/s), io=67.8MiB (71.1MB), run=2006-2006msec 00:31:48.005 WRITE: bw=33.8MiB/s (35.4MB/s), 33.8MiB/s-33.8MiB/s (35.4MB/s-35.4MB/s), io=67.8MiB (71.0MB), run=2006-2006msec 00:31:48.005 ----------------------------------------------------- 00:31:48.005 Suppressions used: 00:31:48.005 count bytes template 00:31:48.005 1 58 /usr/src/fio/parse.c 00:31:48.005 1 8 libtcmalloc_minimal.so 00:31:48.005 ----------------------------------------------------- 00:31:48.005 00:31:48.005 20:26:45 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:48.005 20:26:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:48.005 20:26:45 -- common/autotest_common.sh@10 -- # set +x 00:31:48.005 20:26:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:48.005 20:26:45 -- host/fio.sh@72 -- # sync 00:31:48.005 20:26:45 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:48.005 20:26:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:48.005 20:26:45 -- common/autotest_common.sh@10 -- # set +x 00:31:49.379 20:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.379 20:26:47 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:31:49.379 20:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.379 20:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:49.379 20:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.379 20:26:47 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:31:49.379 20:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.379 20:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:49.945 20:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.945 20:26:47 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:31:49.945 20:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.945 20:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:49.945 20:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.945 20:26:47 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:31:49.945 20:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.945 20:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:50.511 20:26:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:50.511 20:26:48 -- host/fio.sh@81 -- # trap - 
SIGINT SIGTERM EXIT 00:31:50.511 20:26:48 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:31:50.511 20:26:48 -- host/fio.sh@84 -- # nvmftestfini 00:31:50.511 20:26:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:50.511 20:26:48 -- nvmf/common.sh@116 -- # sync 00:31:50.511 20:26:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:50.511 20:26:48 -- nvmf/common.sh@119 -- # set +e 00:31:50.511 20:26:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:50.511 20:26:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:50.511 rmmod nvme_tcp 00:31:50.511 rmmod nvme_fabrics 00:31:50.511 rmmod nvme_keyring 00:31:50.511 20:26:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:50.770 20:26:48 -- nvmf/common.sh@123 -- # set -e 00:31:50.770 20:26:48 -- nvmf/common.sh@124 -- # return 0 00:31:50.770 20:26:48 -- nvmf/common.sh@477 -- # '[' -n 1726156 ']' 00:31:50.770 20:26:48 -- nvmf/common.sh@478 -- # killprocess 1726156 00:31:50.770 20:26:48 -- common/autotest_common.sh@926 -- # '[' -z 1726156 ']' 00:31:50.770 20:26:48 -- common/autotest_common.sh@930 -- # kill -0 1726156 00:31:50.770 20:26:48 -- common/autotest_common.sh@931 -- # uname 00:31:50.770 20:26:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:50.770 20:26:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1726156 00:31:50.770 20:26:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:50.770 20:26:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:50.770 20:26:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1726156' 00:31:50.770 killing process with pid 1726156 00:31:50.770 20:26:48 -- common/autotest_common.sh@945 -- # kill 1726156 00:31:50.770 20:26:48 -- common/autotest_common.sh@950 -- # wait 1726156 00:31:51.337 20:26:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:51.337 20:26:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:51.337 20:26:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:51.337 20:26:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:51.337 20:26:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:51.337 20:26:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.337 20:26:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:51.337 20:26:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.238 20:26:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:53.238 00:31:53.238 real 0m25.368s 00:31:53.238 user 2m24.042s 00:31:53.238 sys 0m7.348s 00:31:53.238 20:26:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:53.238 20:26:51 -- common/autotest_common.sh@10 -- # set +x 00:31:53.238 ************************************ 00:31:53.238 END TEST nvmf_fio_host 00:31:53.238 ************************************ 00:31:53.238 20:26:51 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:53.238 20:26:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:53.238 20:26:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:53.238 20:26:51 -- common/autotest_common.sh@10 -- # set +x 00:31:53.238 ************************************ 00:31:53.238 START TEST nvmf_failover 00:31:53.238 ************************************ 00:31:53.238 20:26:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:53.499 * 
Looking for test storage... 00:31:53.499 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:31:53.499 20:26:51 -- host/failover.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:31:53.499 20:26:51 -- nvmf/common.sh@7 -- # uname -s 00:31:53.499 20:26:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.499 20:26:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.499 20:26:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.499 20:26:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.499 20:26:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:53.499 20:26:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:53.499 20:26:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.499 20:26:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:53.499 20:26:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.499 20:26:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:53.499 20:26:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:31:53.499 20:26:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:31:53.499 20:26:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.499 20:26:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:53.499 20:26:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:31:53.499 20:26:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:31:53.499 20:26:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.499 20:26:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.499 20:26:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.499 20:26:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.499 20:26:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.499 20:26:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.499 20:26:51 -- paths/export.sh@5 -- # export PATH 00:31:53.499 20:26:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.499 20:26:51 -- nvmf/common.sh@46 -- # : 0 00:31:53.499 20:26:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:53.499 20:26:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:53.499 20:26:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:53.499 20:26:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.499 20:26:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.499 20:26:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:53.499 20:26:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:53.499 20:26:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:53.499 20:26:51 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:53.499 20:26:51 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:53.499 20:26:51 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:31:53.499 20:26:51 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:53.499 20:26:51 -- host/failover.sh@18 -- # nvmftestinit 00:31:53.499 20:26:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:53.499 20:26:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:53.499 20:26:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:53.499 20:26:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:53.499 20:26:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:53.499 20:26:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.499 20:26:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:53.499 20:26:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.499 20:26:51 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:31:53.499 20:26:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:53.499 20:26:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:53.499 20:26:51 -- common/autotest_common.sh@10 -- # set +x 00:31:58.834 20:26:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:58.834 20:26:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:58.834 20:26:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:58.834 20:26:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:58.834 20:26:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:58.834 20:26:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:58.834 20:26:56 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:31:58.834 20:26:56 -- nvmf/common.sh@294 -- # net_devs=() 00:31:58.834 20:26:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:58.834 20:26:56 -- nvmf/common.sh@295 -- # e810=() 00:31:58.834 20:26:56 -- nvmf/common.sh@295 -- # local -ga e810 00:31:58.834 20:26:56 -- nvmf/common.sh@296 -- # x722=() 00:31:58.834 20:26:56 -- nvmf/common.sh@296 -- # local -ga x722 00:31:58.834 20:26:56 -- nvmf/common.sh@297 -- # mlx=() 00:31:58.834 20:26:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:58.834 20:26:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:58.834 20:26:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:58.834 20:26:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:58.834 20:26:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:58.834 20:26:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:58.834 20:26:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:58.834 20:26:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:58.834 20:26:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:58.834 20:26:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:58.834 20:26:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:58.834 20:26:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:58.834 20:26:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:58.834 20:26:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:58.834 20:26:56 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:31:58.834 20:26:56 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:31:58.834 20:26:56 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:31:58.834 20:26:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:58.834 20:26:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:58.834 20:26:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:31:58.834 Found 0000:27:00.0 (0x8086 - 0x159b) 00:31:58.834 20:26:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:58.834 20:26:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:58.834 20:26:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.834 20:26:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.834 20:26:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:58.834 20:26:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:58.834 20:26:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:31:58.834 Found 0000:27:00.1 (0x8086 - 0x159b) 00:31:58.834 20:26:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:58.834 20:26:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:58.834 20:26:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.834 20:26:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.834 20:26:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:58.834 20:26:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:58.834 20:26:56 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:31:58.834 20:26:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:58.834 20:26:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.834 20:26:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:58.834 20:26:56 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.834 20:26:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:31:58.834 Found net devices under 0000:27:00.0: cvl_0_0 00:31:58.834 20:26:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.834 20:26:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:58.834 20:26:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.834 20:26:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:58.835 20:26:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.835 20:26:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:31:58.835 Found net devices under 0000:27:00.1: cvl_0_1 00:31:58.835 20:26:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.835 20:26:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:58.835 20:26:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:58.835 20:26:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:58.835 20:26:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:58.835 20:26:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:58.835 20:26:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:58.835 20:26:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:58.835 20:26:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:58.835 20:26:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:58.835 20:26:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:58.835 20:26:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:58.835 20:26:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:58.835 20:26:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:58.835 20:26:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:58.835 20:26:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:58.835 20:26:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:58.835 20:26:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:58.835 20:26:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:58.835 20:26:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:58.835 20:26:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:58.835 20:26:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:58.835 20:26:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:58.835 20:26:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:58.835 20:26:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:58.835 20:26:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:58.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:58.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:31:58.835 00:31:58.835 --- 10.0.0.2 ping statistics --- 00:31:58.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.835 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:31:58.835 20:26:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:58.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:58.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:31:58.835 00:31:58.835 --- 10.0.0.1 ping statistics --- 00:31:58.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.835 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:31:58.835 20:26:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:58.835 20:26:56 -- nvmf/common.sh@410 -- # return 0 00:31:58.835 20:26:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:58.835 20:26:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:58.835 20:26:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:58.835 20:26:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:58.835 20:26:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:58.835 20:26:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:58.835 20:26:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:58.835 20:26:56 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:58.835 20:26:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:58.835 20:26:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:58.835 20:26:56 -- common/autotest_common.sh@10 -- # set +x 00:31:58.835 20:26:56 -- nvmf/common.sh@469 -- # nvmfpid=1734299 00:31:58.835 20:26:56 -- nvmf/common.sh@470 -- # waitforlisten 1734299 00:31:58.835 20:26:56 -- common/autotest_common.sh@819 -- # '[' -z 1734299 ']' 00:31:58.835 20:26:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:58.835 20:26:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:58.835 20:26:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:58.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:58.835 20:26:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:58.835 20:26:56 -- common/autotest_common.sh@10 -- # set +x 00:31:58.835 20:26:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:58.835 [2024-04-25 20:26:56.740289] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:31:58.835 [2024-04-25 20:26:56.740416] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.093 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.093 [2024-04-25 20:26:56.877052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:59.093 [2024-04-25 20:26:56.975956] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:59.093 [2024-04-25 20:26:56.976150] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.093 [2024-04-25 20:26:56.976164] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.093 [2024-04-25 20:26:56.976174] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
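[editor's note] One detail worth calling out in the trace above: once the namespace exists, nvmf/common.sh@269 prepends the ip-netns-exec prefix onto the NVMF_APP array, which is why nvmfappstart launches nvmf_tgt inside cvl_0_0_ns_spdk a few lines later. A minimal sketch of that array-composition pattern, using the names and command shown in the trace (the real helper does more around it):

  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
  NVMF_APP=(/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt)
  # wrap the target command with the namespace prefix
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
  "${NVMF_APP[@]}" -i 0 -e 0xFFFF -m 0xE &   # effectively: ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE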
00:31:59.093 [2024-04-25 20:26:56.976329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:59.093 [2024-04-25 20:26:56.976361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.093 [2024-04-25 20:26:56.976370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:59.658 20:26:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:59.658 20:26:57 -- common/autotest_common.sh@852 -- # return 0 00:31:59.658 20:26:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:59.658 20:26:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:59.658 20:26:57 -- common/autotest_common.sh@10 -- # set +x 00:31:59.658 20:26:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.658 20:26:57 -- host/failover.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:59.658 [2024-04-25 20:26:57.581306] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.920 20:26:57 -- host/failover.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:59.920 Malloc0 00:31:59.920 20:26:57 -- host/failover.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:00.180 20:26:57 -- host/failover.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:00.180 20:26:58 -- host/failover.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:00.439 [2024-04-25 20:26:58.185024] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.439 20:26:58 -- host/failover.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:00.439 [2024-04-25 20:26:58.329105] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:00.439 20:26:58 -- host/failover.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:00.700 [2024-04-25 20:26:58.469254] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:00.700 20:26:58 -- host/failover.sh@31 -- # bdevperf_pid=1734694 00:32:00.700 20:26:58 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:00.700 20:26:58 -- host/failover.sh@34 -- # waitforlisten 1734694 /var/tmp/bdevperf.sock 00:32:00.700 20:26:58 -- host/failover.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:00.700 20:26:58 -- common/autotest_common.sh@819 -- # '[' -z 1734694 ']' 00:32:00.700 20:26:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:00.700 20:26:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:00.700 20:26:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:32:00.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:00.700 20:26:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:00.700 20:26:58 -- common/autotest_common.sh@10 -- # set +x 00:32:01.640 20:26:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:01.640 20:26:59 -- common/autotest_common.sh@852 -- # return 0 00:32:01.640 20:26:59 -- host/failover.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:01.640 NVMe0n1 00:32:01.640 20:26:59 -- host/failover.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:01.899 00:32:01.899 20:26:59 -- host/failover.sh@39 -- # run_test_pid=1734982 00:32:01.899 20:26:59 -- host/failover.sh@41 -- # sleep 1 00:32:01.899 20:26:59 -- host/failover.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:02.840 20:27:00 -- host/failover.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:03.099 [2024-04-25 20:27:00.886119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.099 [2024-04-25 20:27:00.886213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.099 [2024-04-25 20:27:00.886222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is 
same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886308] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886315] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886349] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886358] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886387] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886393] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886419] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886426] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886434] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same 
with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886479] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886499] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 [2024-04-25 20:27:00.886552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:03.100 20:27:00 -- host/failover.sh@45 -- # sleep 3 00:32:06.385 20:27:03 -- host/failover.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:06.385 00:32:06.385 20:27:04 -- host/failover.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:06.644 [2024-04-25 20:27:04.384235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.644 [2024-04-25 20:27:04.384294] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.644 [2024-04-25 20:27:04.384303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.644 [2024-04-25 20:27:04.384321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.644 [2024-04-25 20:27:04.384329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the 
state(5) to be set 00:32:06.644 [2024-04-25 20:27:04.384336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.644 [2024-04-25 20:27:04.384344] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.644 [2024-04-25 20:27:04.384350] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.644 [2024-04-25 20:27:04.384357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.644 [2024-04-25 20:27:04.384365] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.644 [2024-04-25 20:27:04.384372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.644 [2024-04-25 20:27:04.384379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.644 [2024-04-25 20:27:04.384387] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.644 [2024-04-25 20:27:04.384394] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.644 [2024-04-25 20:27:04.384402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384409] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384425] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384439] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384462] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384489] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the 
state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384516] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384524] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384571] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384596] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384603] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384630] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384655] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the 
state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384692] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384707] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384714] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 [2024-04-25 20:27:04.384730] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:06.645 20:27:04 -- host/failover.sh@50 -- # sleep 3 00:32:09.939 20:27:07 -- host/failover.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.939 [2024-04-25 20:27:07.545839] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.939 20:27:07 -- host/failover.sh@55 -- # sleep 1 00:32:10.874 20:27:08 -- host/failover.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:10.874 [2024-04-25 20:27:08.711424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711522] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711537] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 
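Stripped of the qpair state noise, the failover choreography the script has driven up to this point is a fixed sequence of listener changes, each intended to force bdevperf's active path to move (same $rootdir shorthand; addresses, ports and subsystem name verbatim from this run):

  $rootdir/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # I/O fails over to 4421
  sleep 3
  # add a third path on 4422 through bdevperf's RPC socket, then drop 4421
  $rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rootdir/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
  sleep 3
  $rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # bring 4420 back
  sleep 1
  $rootdir/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420

The remaining tqpair=0x618000003c80 records below appear to be that last removal tearing down the qpair that was serving port 4422.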
[2024-04-25 20:27:08.711545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711565] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711601] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711617] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711631] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.874 [2024-04-25 20:27:08.711668] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711675] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711683] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711697] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 
[2024-04-25 20:27:08.711704] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711718] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711727] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711780] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711792] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711799] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711822] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 [2024-04-25 20:27:08.711837] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:10.875 20:27:08 -- host/failover.sh@59 -- # wait 1734982 00:32:17.461 0 00:32:17.461 20:27:14 -- host/failover.sh@61 -- # killprocess 1734694 00:32:17.461 20:27:14 -- common/autotest_common.sh@926 -- # '[' -z 1734694 ']' 00:32:17.461 20:27:14 -- common/autotest_common.sh@930 -- # kill -0 1734694 00:32:17.461 20:27:14 -- common/autotest_common.sh@931 -- # uname 00:32:17.461 20:27:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:17.461 20:27:14 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 1734694 00:32:17.461 20:27:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:17.461 20:27:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:17.461 20:27:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1734694' 00:32:17.461 killing process with pid 1734694 00:32:17.461 20:27:14 -- common/autotest_common.sh@945 -- # kill 1734694 00:32:17.461 20:27:14 -- common/autotest_common.sh@950 -- # wait 1734694 00:32:17.461 20:27:15 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:17.461 [2024-04-25 20:26:58.573173] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:17.461 [2024-04-25 20:26:58.573327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1734694 ] 00:32:17.461 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.461 [2024-04-25 20:26:58.701143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.461 [2024-04-25 20:26:58.792270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.461 Running I/O for 15 seconds... 00:32:17.461 [2024-04-25 20:27:00.887008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.461 [2024-04-25 20:27:00.887066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.461 [2024-04-25 20:27:00.887095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.461 [2024-04-25 20:27:00.887105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.461 [2024-04-25 20:27:00.887116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.461 [2024-04-25 20:27:00.887124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.461 [2024-04-25 20:27:00.887134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.461 [2024-04-25 20:27:00.887142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.461 [2024-04-25 20:27:00.887152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.461 [2024-04-25 20:27:00.887160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.461 [2024-04-25 20:27:00.887170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.461 [2024-04-25 20:27:00.887179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.461 [2024-04-25 20:27:00.887189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.461 [2024-04-25 20:27:00.887198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.461 [2024-04-25 20:27:00.887208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.461 [2024-04-25 20:27:00.887216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.461 [2024-04-25 20:27:00.887226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 
[2024-04-25 20:27:00.887588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.462 [2024-04-25 20:27:00.887773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.462 [2024-04-25 20:27:00.887936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.462 [2024-04-25 20:27:00.887954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.462 [2024-04-25 20:27:00.887964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.462 [2024-04-25 20:27:00.887971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.887981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.887991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.463 [2024-04-25 20:27:00.888009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.463 [2024-04-25 20:27:00.888026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.463 [2024-04-25 20:27:00.888044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.463 [2024-04-25 20:27:00.888096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.463 [2024-04-25 20:27:00.888306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.463 [2024-04-25 20:27:00.888323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 
[2024-04-25 20:27:00.888333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.463 [2024-04-25 20:27:00.888342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.463 [2024-04-25 20:27:00.888378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.463 [2024-04-25 20:27:00.888396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.463 [2024-04-25 20:27:00.888414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.463 [2024-04-25 20:27:00.888449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.463 [2024-04-25 20:27:00.888469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.463 [2024-04-25 20:27:00.888547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.463 [2024-04-25 20:27:00.888565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.463 [2024-04-25 20:27:00.888581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.463 [2024-04-25 20:27:00.888600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.463 [2024-04-25 20:27:00.888704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.463 [2024-04-25 20:27:00.888712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.888723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.888730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.888739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.888747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.888757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.888765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.888775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.888782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.888793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.888801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.888815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.464 [2024-04-25 20:27:00.888822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.888833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.888841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.888850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.888858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.888868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.888876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.888886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22592 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.464 [2024-04-25 20:27:00.888894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.888904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.464 [2024-04-25 20:27:00.888912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.888923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.464 [2024-04-25 20:27:00.888930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.888940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.464 [2024-04-25 20:27:00.888947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.888958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.888966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.888976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.464 [2024-04-25 20:27:00.888984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.888993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.464 [2024-04-25 20:27:00.889037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 
[2024-04-25 20:27:00.889073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.464 [2024-04-25 20:27:00.889247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889270] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.464 [2024-04-25 20:27:00.889315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.464 [2024-04-25 20:27:00.889452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.464 [2024-04-25 20:27:00.889461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6130000042c0 is same with the state(5) to be set 00:32:17.464 [2024-04-25 20:27:00.889474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:32:17.465 [2024-04-25 20:27:00.889483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.465 [2024-04-25 20:27:00.889498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:8 PRP1 0x0 PRP2 0x0 00:32:17.465 [2024-04-25 20:27:00.889508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:00.889651] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6130000042c0 was disconnected and freed. reset controller. 00:32:17.465 [2024-04-25 20:27:00.889677] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:17.465 [2024-04-25 20:27:00.889710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.465 [2024-04-25 20:27:00.889722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:00.889735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.465 [2024-04-25 20:27:00.889743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:00.889754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.465 [2024-04-25 20:27:00.889763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:00.889771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.465 [2024-04-25 20:27:00.889782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:00.889794] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.465 [2024-04-25 20:27:00.889842] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:32:17.465 [2024-04-25 20:27:00.891597] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.465 [2024-04-25 20:27:00.920572] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
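The block above is one complete recovery cycle: queued I/O on qpair 0x6130000042c0 is aborted with SQ DELETION, the qpair is disconnected and freed, bdev_nvme starts a failover from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset completes. A small helper can condense output like this into one line per event; the sketch below is illustrative only, keys solely off the *NOTICE* strings visible in this log, and is not part of SPDK or of the test scripts driving this run.

#!/usr/bin/env python3
# Sketch: condense SPDK bdev_nvme failover console output into per-event one-liners.
# The patterns below are copied from the *NOTICE* strings shown in the log above.
import re
import sys

EVENTS = [
    (re.compile(r"qpair (0x[0-9a-f]+) was disconnected and freed"), "qpair {0} disconnected"),
    (re.compile(r"Start failover from (\S+) to (\S+)"), "failover {0} -> {1}"),
    (re.compile(r"Resetting controller successful"), "controller reset OK"),
]

def summarize(stream):
    # Yield one short event string per matching log line (long wrapped lines still match).
    for line in stream:
        for pattern, template in EVENTS:
            match = pattern.search(line)
            if match:
                yield template.format(*match.groups())

if __name__ == "__main__":
    for event in summarize(sys.stdin):
        print(event)

Usage (filename is arbitrary): python3 summarize_failover.py < console.log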
00:32:17.465 [2024-04-25 20:27:04.384834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.384887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.384910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.384925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.384936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.384944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.384954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.384962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.384972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.384980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.384990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.384998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:98 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9840 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.465 [2024-04-25 20:27:04.385470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.465 [2024-04-25 20:27:04.385478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.467 [2024-04-25 20:27:04.385502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.467 [2024-04-25 20:27:04.385521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.467 [2024-04-25 20:27:04.385539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.467 [2024-04-25 20:27:04.385557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.467 [2024-04-25 20:27:04.385575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.467 [2024-04-25 20:27:04.385594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.467 [2024-04-25 20:27:04.385612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.467 [2024-04-25 20:27:04.385631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.467 [2024-04-25 
20:27:04.385652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.467 [2024-04-25 20:27:04.385670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.467 [2024-04-25 20:27:04.385687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.467 [2024-04-25 20:27:04.385704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.467 [2024-04-25 20:27:04.385724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.467 [2024-04-25 20:27:04.385742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.467 [2024-04-25 20:27:04.385760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.467 [2024-04-25 20:27:04.385778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.467 [2024-04-25 20:27:04.385796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.467 [2024-04-25 20:27:04.385813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.467 [2024-04-25 20:27:04.385831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.467 [2024-04-25 20:27:04.385850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.467 [2024-04-25 20:27:04.385868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.467 [2024-04-25 20:27:04.385887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.467 [2024-04-25 20:27:04.385896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.467 [2024-04-25 20:27:04.385903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.385913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.385921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.385931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.385938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.385948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.468 [2024-04-25 20:27:04.385956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.385966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.385973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.385983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.385991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.468 [2024-04-25 20:27:04.386008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.468 [2024-04-25 20:27:04.386080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.468 [2024-04-25 20:27:04.386098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.468 [2024-04-25 20:27:04.386151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.468 [2024-04-25 20:27:04.386204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.468 [2024-04-25 20:27:04.386225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.468 [2024-04-25 20:27:04.386242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 
20:27:04.386376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.468 [2024-04-25 20:27:04.386401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.468 [2024-04-25 20:27:04.386456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.468 [2024-04-25 20:27:04.386496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.468 [2024-04-25 20:27:04.386514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.468 [2024-04-25 20:27:04.386551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.468 [2024-04-25 20:27:04.386569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.468 [2024-04-25 20:27:04.386586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.468 [2024-04-25 20:27:04.386617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.468 [2024-04-25 20:27:04.386634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.468 [2024-04-25 20:27:04.386644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.386652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.386663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.386675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.386685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.386693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.386703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.386711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.386721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.386730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.386742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.386752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.386762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:48 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.386769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.386781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.386791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.386803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.386813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.386824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.469 [2024-04-25 20:27:04.386833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.386844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.386852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.386862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.469 [2024-04-25 20:27:04.386870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.386880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.386888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.386897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.469 [2024-04-25 20:27:04.386904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.386913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.469 [2024-04-25 20:27:04.386921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.386930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.469 [2024-04-25 20:27:04.386937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.386947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.386954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.386964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.469 [2024-04-25 20:27:04.386973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.386982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.386990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.387008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.387027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.469 [2024-04-25 20:27:04.387044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.469 [2024-04-25 20:27:04.387061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.469 [2024-04-25 20:27:04.387078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.469 [2024-04-25 20:27:04.387095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.387113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 
20:27:04.387130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.387147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.387164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.387182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.387208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.469 [2024-04-25 20:27:04.387227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000004640 is same with the state(5) to be set 00:32:17.469 [2024-04-25 20:27:04.387248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.469 [2024-04-25 20:27:04.387258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.469 [2024-04-25 20:27:04.387268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9848 len:8 PRP1 0x0 PRP2 0x0 00:32:17.469 [2024-04-25 20:27:04.387277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387411] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x613000004640 was disconnected and freed. reset controller. 
00:32:17.469 [2024-04-25 20:27:04.387427] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:17.469 [2024-04-25 20:27:04.387456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.469 [2024-04-25 20:27:04.387466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.469 [2024-04-25 20:27:04.387486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.469 [2024-04-25 20:27:04.387509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.469 [2024-04-25 20:27:04.387526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.469 [2024-04-25 20:27:04.387535] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.469 [2024-04-25 20:27:04.387581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:32:17.469 [2024-04-25 20:27:04.389293] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.470 [2024-04-25 20:27:04.424401] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
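Editor's note: the burst of ABORTED - SQ DELETION notices above is the expected fallout of tearing down the I/O queue pair on the failing path; once bdev_nvme switches the target from 10.0.0.2:4421 to 10.0.0.2:4422 the controller is reset and I/O resumes. A quick way to confirm the bdev survived such a switch is to query the attached controllers over the bdevperf RPC socket, as the test itself does later in this log. A minimal sketch, assuming the repo path seen in this workspace and the /var/tmp/bdevperf.sock socket used by the bdevperf instance started further down (adjust to whatever -r was passed):

    #!/usr/bin/env bash
    # Sketch: verify the NVMe bdev controller is still attached after a path failover.
    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock   # whatever socket bdevperf was started with (-r)

    # bdev_nvme_get_controllers lists attached controllers; NVMe0 should still be there.
    if "$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0; then
        echo "NVMe0 still attached after failover"
    else
        echo "NVMe0 missing - failover did not recover the controller" >&2
        exit 1
    fi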
00:32:17.470 [2024-04-25 20:27:08.711957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:118672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:118136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:118152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:118200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:118216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:118224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 
20:27:08.712206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:118248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:118720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:118728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:118760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:118776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:118792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:118872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:118888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:118896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:118280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:118288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:118312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712563] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:115 nsid:1 lba:118320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:118368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:118904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:118920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:118928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:118936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:118944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.470 [2024-04-25 20:27:08.712714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.470 [2024-04-25 20:27:08.712723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:118960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.712731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.712741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 
nsid:1 lba:118384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.712748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.712759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.712766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.712776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:118424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.712789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.712799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.712806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.712816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:118448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.712824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.712834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:118472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.712842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.712852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.712860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.712869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.712876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.712886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.471 [2024-04-25 20:27:08.712894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.712904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.712912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.712922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118992 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:32:17.471 [2024-04-25 20:27:08.712930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.712940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.712947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.712957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:119008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.471 [2024-04-25 20:27:08.712965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.712974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:119016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.712982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.712992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:118520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.713000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.713011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:118528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.713018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.713028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.713037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.713046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.713054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.713064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:118592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.713072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.713081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.713089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.713098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:118632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:17.471 [2024-04-25 20:27:08.713107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.713116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:118648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.713124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.713134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:119024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.713142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.713152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.471 [2024-04-25 20:27:08.713169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.713179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:119040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.713189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.713198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:119048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.713207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.713216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:119056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.471 [2024-04-25 20:27:08.713225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.713234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:119064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.471 [2024-04-25 20:27:08.713244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.713254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 20:27:08.713262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.713272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.471 [2024-04-25 20:27:08.713281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.713290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:119088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.471 [2024-04-25 
20:27:08.713299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.471 [2024-04-25 20:27:08.713308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:119096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.472 [2024-04-25 20:27:08.713317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:119136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:119144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:119152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:119160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:119168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.472 [2024-04-25 20:27:08.713476] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:119176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:119184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:119192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.472 [2024-04-25 20:27:08.713532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:119200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:119208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.472 [2024-04-25 20:27:08.713567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:119216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.472 [2024-04-25 20:27:08.713584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:118656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:118680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:118688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:118704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713654] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:118712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:118752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:119224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.472 [2024-04-25 20:27:08.713741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:119232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.472 [2024-04-25 20:27:08.713759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:119264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:119272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.472 [2024-04-25 20:27:08.713847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:119288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:119296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.472 [2024-04-25 20:27:08.713899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:119304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.472 [2024-04-25 20:27:08.713918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:119312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:119320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.713953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:119328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.472 [2024-04-25 20:27:08.713971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.472 [2024-04-25 20:27:08.713988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.713997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:119344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.472 [2024-04-25 20:27:08.714005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:17.472 [2024-04-25 20:27:08.714014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:119352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.473 [2024-04-25 20:27:08.714022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:119360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.473 [2024-04-25 20:27:08.714039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.473 [2024-04-25 20:27:08.714056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.473 [2024-04-25 20:27:08.714075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.473 [2024-04-25 20:27:08.714092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:119392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.473 [2024-04-25 20:27:08.714109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.473 [2024-04-25 20:27:08.714127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.473 [2024-04-25 20:27:08.714144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.473 [2024-04-25 20:27:08.714163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:118784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.473 [2024-04-25 20:27:08.714184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 
20:27:08.714195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.473 [2024-04-25 20:27:08.714204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:118824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.473 [2024-04-25 20:27:08.714222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:118840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.473 [2024-04-25 20:27:08.714243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.473 [2024-04-25 20:27:08.714263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:118912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.473 [2024-04-25 20:27:08.714284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:118952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.473 [2024-04-25 20:27:08.714305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000004d40 is same with the state(5) to be set 00:32:17.473 [2024-04-25 20:27:08.714331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.473 [2024-04-25 20:27:08.714341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.473 [2024-04-25 20:27:08.714352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118968 len:8 PRP1 0x0 PRP2 0x0 00:32:17.473 [2024-04-25 20:27:08.714361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714500] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x613000004d40 was disconnected and freed. reset controller. 
00:32:17.473 [2024-04-25 20:27:08.714515] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:17.473 [2024-04-25 20:27:08.714542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.473 [2024-04-25 20:27:08.714554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.473 [2024-04-25 20:27:08.714572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.473 [2024-04-25 20:27:08.714590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.473 [2024-04-25 20:27:08.714607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.473 [2024-04-25 20:27:08.714616] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.473 [2024-04-25 20:27:08.716415] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.473 [2024-04-25 20:27:08.716446] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:32:17.473 [2024-04-25 20:27:08.867980] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
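Editor's note: each forced path switch in this run ends with a "Resetting controller successful" notice, and this switch from 10.0.0.2:4422 back to 10.0.0.2:4420 is the last of them. The shell trace a little further down counts exactly those messages with grep -c and fails the test unless it finds three. Rewritten as a stand-alone check, roughly (the log file name is illustrative; failover.sh greps the output it captured into try.txt):

    #!/usr/bin/env bash
    # Sketch of the pass criterion applied a few lines below:
    # every forced path switch must end in a successful controller reset.
    log=try.txt        # illustrative; the test greps its own captured output
    expected=3         # one successful reset per forced failover in this scenario

    count=$(grep -c 'Resetting controller successful' "$log")
    if (( count != expected )); then
        echo "expected $expected successful resets, found $count" >&2
        exit 1
    fi
    echo "all $count failovers recovered"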
00:32:17.473 00:32:17.473 Latency(us) 00:32:17.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.473 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:17.473 Verification LBA range: start 0x0 length 0x4000 00:32:17.473 NVMe0n1 : 15.01 17585.41 68.69 1099.16 0.00 6838.23 551.88 12831.26 00:32:17.473 =================================================================================================================== 00:32:17.473 Total : 17585.41 68.69 1099.16 0.00 6838.23 551.88 12831.26 00:32:17.473 Received shutdown signal, test time was about 15.000000 seconds 00:32:17.473 00:32:17.473 Latency(us) 00:32:17.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.473 =================================================================================================================== 00:32:17.473 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:17.473 20:27:15 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:17.473 20:27:15 -- host/failover.sh@65 -- # count=3 00:32:17.473 20:27:15 -- host/failover.sh@67 -- # (( count != 3 )) 00:32:17.473 20:27:15 -- host/failover.sh@73 -- # bdevperf_pid=1738475 00:32:17.473 20:27:15 -- host/failover.sh@75 -- # waitforlisten 1738475 /var/tmp/bdevperf.sock 00:32:17.473 20:27:15 -- common/autotest_common.sh@819 -- # '[' -z 1738475 ']' 00:32:17.473 20:27:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:17.473 20:27:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:17.473 20:27:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:17.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:17.473 20:27:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:17.473 20:27:15 -- common/autotest_common.sh@10 -- # set +x 00:32:17.473 20:27:15 -- host/failover.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:18.416 20:27:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:18.416 20:27:16 -- common/autotest_common.sh@852 -- # return 0 00:32:18.416 20:27:16 -- host/failover.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:18.416 [2024-04-25 20:27:16.186470] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:18.416 20:27:16 -- host/failover.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:18.416 [2024-04-25 20:27:16.338505] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:18.675 20:27:16 -- host/failover.sh@78 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:18.933 NVMe0n1 00:32:18.933 20:27:16 -- host/failover.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:19.192 00:32:19.192 20:27:16 -- host/failover.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:19.451 00:32:19.451 20:27:17 -- host/failover.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:19.451 20:27:17 -- host/failover.sh@82 -- # grep -q NVMe0 00:32:19.709 20:27:17 -- host/failover.sh@84 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:19.709 20:27:17 -- host/failover.sh@87 -- # sleep 3 00:32:22.999 20:27:20 -- host/failover.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:22.999 20:27:20 -- host/failover.sh@88 -- # grep -q NVMe0 00:32:22.999 20:27:20 -- host/failover.sh@90 -- # run_test_pid=1739462 00:32:22.999 20:27:20 -- host/failover.sh@92 -- # wait 1739462 00:32:22.999 20:27:20 -- host/failover.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:23.929 0 00:32:23.929 20:27:21 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:23.929 [2024-04-25 20:27:15.319903] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
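Editor's note: the xtrace above is the second phase of the test: bdevperf is started idle (-z) on /var/tmp/bdevperf.sock, the target gains extra listeners on ports 4421 and 4422, the same subsystem is attached three times under the bdev name NVMe0 (via 4420, 4421 and 4422) so the bdev has alternate paths, and detaching the 4420 path is what provokes a failover like the ones logged earlier. As a plain script it would look roughly like this; the paths, address and NQN are the ones in the trace, and the waiting loop stands in for the test's waitforlisten helper:

    #!/usr/bin/env bash
    # Sketch of the RPC-driven failover setup traced above (simplified).
    rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk
    rpc=$rootdir/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1
    ip=10.0.0.2

    # Start bdevperf idle (-z) so it can be configured over its RPC socket.
    "$rootdir/build/examples/bdevperf" -z -r "$sock" -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    while [ ! -S "$sock" ]; do sleep 0.2; done   # stand-in for waitforlisten

    # Additional listeners on the target side (default target RPC socket).
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a "$ip" -s 4421
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a "$ip" -s 4422

    # Attach the subsystem through all three ports under one bdev name,
    # giving NVMe0 alternate trids to fail over to.
    for port in 4420 4421 4422; do
        "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a "$ip" -s "$port" -f ipv4 -n "$nqn"
    done

    # Drop the primary path; the bdev should fail over instead of going away.
    "$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a "$ip" -s 4420 -f ipv4 -n "$nqn"
    sleep 3
    "$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0

    # Run the actual I/O pass through the already-running bdevperf.
    "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests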
00:32:23.929 [2024-04-25 20:27:15.320019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738475 ] 00:32:23.929 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.929 [2024-04-25 20:27:15.432547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.929 [2024-04-25 20:27:15.526380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.929 [2024-04-25 20:27:17.533319] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:23.929 [2024-04-25 20:27:17.533376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.929 [2024-04-25 20:27:17.533391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.929 [2024-04-25 20:27:17.533403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.929 [2024-04-25 20:27:17.533411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.929 [2024-04-25 20:27:17.533420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.929 [2024-04-25 20:27:17.533428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.929 [2024-04-25 20:27:17.533436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.929 [2024-04-25 20:27:17.533445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.929 [2024-04-25 20:27:17.533453] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.929 [2024-04-25 20:27:17.533502] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.929 [2024-04-25 20:27:17.533524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:32:23.929 [2024-04-25 20:27:17.667915] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:23.929 Running I/O for 1 seconds... 
00:32:23.929 00:32:23.930 Latency(us) 00:32:23.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.930 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:23.930 Verification LBA range: start 0x0 length 0x4000 00:32:23.930 NVMe0n1 : 1.00 17258.07 67.41 0.00 0.00 7388.40 1000.29 13038.21 00:32:23.930 =================================================================================================================== 00:32:23.930 Total : 17258.07 67.41 0.00 0.00 7388.40 1000.29 13038.21 00:32:23.930 20:27:21 -- host/failover.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:23.930 20:27:21 -- host/failover.sh@95 -- # grep -q NVMe0 00:32:24.187 20:27:21 -- host/failover.sh@98 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:24.187 20:27:22 -- host/failover.sh@99 -- # grep -q NVMe0 00:32:24.187 20:27:22 -- host/failover.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:24.447 20:27:22 -- host/failover.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:24.447 20:27:22 -- host/failover.sh@101 -- # sleep 3 00:32:27.815 20:27:25 -- host/failover.sh@103 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:27.815 20:27:25 -- host/failover.sh@103 -- # grep -q NVMe0 00:32:27.815 20:27:25 -- host/failover.sh@108 -- # killprocess 1738475 00:32:27.815 20:27:25 -- common/autotest_common.sh@926 -- # '[' -z 1738475 ']' 00:32:27.815 20:27:25 -- common/autotest_common.sh@930 -- # kill -0 1738475 00:32:27.815 20:27:25 -- common/autotest_common.sh@931 -- # uname 00:32:27.815 20:27:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:27.815 20:27:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1738475 00:32:27.815 20:27:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:27.815 20:27:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:27.815 20:27:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1738475' 00:32:27.815 killing process with pid 1738475 00:32:27.815 20:27:25 -- common/autotest_common.sh@945 -- # kill 1738475 00:32:27.816 20:27:25 -- common/autotest_common.sh@950 -- # wait 1738475 00:32:28.072 20:27:25 -- host/failover.sh@110 -- # sync 00:32:28.072 20:27:25 -- host/failover.sh@111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:28.330 20:27:26 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:28.330 20:27:26 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:28.330 20:27:26 -- host/failover.sh@116 -- # nvmftestfini 00:32:28.330 20:27:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:28.330 20:27:26 -- nvmf/common.sh@116 -- # sync 00:32:28.330 20:27:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:28.330 20:27:26 -- nvmf/common.sh@119 -- # set +e 00:32:28.330 20:27:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:28.330 20:27:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 
00:32:28.330 rmmod nvme_tcp 00:32:28.330 rmmod nvme_fabrics 00:32:28.330 rmmod nvme_keyring 00:32:28.330 20:27:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:28.330 20:27:26 -- nvmf/common.sh@123 -- # set -e 00:32:28.330 20:27:26 -- nvmf/common.sh@124 -- # return 0 00:32:28.330 20:27:26 -- nvmf/common.sh@477 -- # '[' -n 1734299 ']' 00:32:28.330 20:27:26 -- nvmf/common.sh@478 -- # killprocess 1734299 00:32:28.330 20:27:26 -- common/autotest_common.sh@926 -- # '[' -z 1734299 ']' 00:32:28.330 20:27:26 -- common/autotest_common.sh@930 -- # kill -0 1734299 00:32:28.330 20:27:26 -- common/autotest_common.sh@931 -- # uname 00:32:28.330 20:27:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:28.330 20:27:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1734299 00:32:28.330 20:27:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:28.330 20:27:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:28.330 20:27:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1734299' 00:32:28.330 killing process with pid 1734299 00:32:28.330 20:27:26 -- common/autotest_common.sh@945 -- # kill 1734299 00:32:28.330 20:27:26 -- common/autotest_common.sh@950 -- # wait 1734299 00:32:28.898 20:27:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:28.898 20:27:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:28.898 20:27:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:28.898 20:27:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:28.898 20:27:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:28.898 20:27:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.898 20:27:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:28.898 20:27:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.436 20:27:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:31.436 00:32:31.437 real 0m37.674s 00:32:31.437 user 1m59.775s 00:32:31.437 sys 0m6.845s 00:32:31.437 20:27:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:31.437 20:27:28 -- common/autotest_common.sh@10 -- # set +x 00:32:31.437 ************************************ 00:32:31.437 END TEST nvmf_failover 00:32:31.437 ************************************ 00:32:31.437 20:27:28 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:31.437 20:27:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:31.437 20:27:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:31.437 20:27:28 -- common/autotest_common.sh@10 -- # set +x 00:32:31.437 ************************************ 00:32:31.437 START TEST nvmf_discovery 00:32:31.437 ************************************ 00:32:31.437 20:27:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:31.437 * Looking for test storage... 
00:32:31.437 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:32:31.437 20:27:28 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:32:31.437 20:27:28 -- nvmf/common.sh@7 -- # uname -s 00:32:31.437 20:27:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:31.437 20:27:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:31.437 20:27:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:31.437 20:27:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:31.437 20:27:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:31.437 20:27:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:31.437 20:27:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:31.437 20:27:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:31.437 20:27:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:31.437 20:27:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:31.437 20:27:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:32:31.437 20:27:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:32:31.437 20:27:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:31.437 20:27:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:31.437 20:27:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:32:31.437 20:27:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:32:31.437 20:27:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:31.437 20:27:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:31.437 20:27:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:31.437 20:27:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.437 20:27:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.437 20:27:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.437 20:27:28 -- paths/export.sh@5 -- # export PATH 00:32:31.437 20:27:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.437 20:27:28 -- nvmf/common.sh@46 -- # : 0 00:32:31.437 20:27:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:31.437 20:27:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:31.437 20:27:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:31.437 20:27:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:31.437 20:27:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:31.437 20:27:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:31.437 20:27:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:31.437 20:27:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:31.437 20:27:28 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:31.437 20:27:28 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:31.437 20:27:28 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:31.437 20:27:28 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:31.437 20:27:28 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:31.437 20:27:28 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:31.437 20:27:28 -- host/discovery.sh@25 -- # nvmftestinit 00:32:31.437 20:27:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:31.437 20:27:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:31.437 20:27:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:31.437 20:27:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:31.437 20:27:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:31.437 20:27:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.437 20:27:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:31.437 20:27:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.437 20:27:28 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:32:31.437 20:27:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:31.437 20:27:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:31.437 20:27:28 -- common/autotest_common.sh@10 -- # set +x 00:32:36.710 20:27:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:36.710 20:27:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:36.710 20:27:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:36.710 20:27:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:36.710 20:27:33 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:36.710 20:27:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:36.710 20:27:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:36.710 20:27:33 -- nvmf/common.sh@294 -- # net_devs=() 00:32:36.710 20:27:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:36.710 20:27:33 -- nvmf/common.sh@295 -- # e810=() 00:32:36.710 20:27:33 -- nvmf/common.sh@295 -- # local -ga e810 00:32:36.710 20:27:33 -- nvmf/common.sh@296 -- # x722=() 00:32:36.710 20:27:33 -- nvmf/common.sh@296 -- # local -ga x722 00:32:36.710 20:27:33 -- nvmf/common.sh@297 -- # mlx=() 00:32:36.710 20:27:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:36.710 20:27:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:36.710 20:27:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:36.710 20:27:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:36.710 20:27:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:36.710 20:27:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:36.710 20:27:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:36.710 20:27:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:36.710 20:27:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:36.710 20:27:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:36.710 20:27:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:36.710 20:27:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:36.710 20:27:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:36.710 20:27:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:36.710 20:27:33 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:32:36.710 20:27:33 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:32:36.710 20:27:33 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:32:36.710 20:27:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:36.710 20:27:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:36.710 20:27:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:32:36.710 Found 0000:27:00.0 (0x8086 - 0x159b) 00:32:36.710 20:27:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:36.710 20:27:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:36.710 20:27:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.710 20:27:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.710 20:27:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:36.710 20:27:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:36.710 20:27:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:32:36.710 Found 0000:27:00.1 (0x8086 - 0x159b) 00:32:36.710 20:27:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:36.710 20:27:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:36.710 20:27:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.710 20:27:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.710 20:27:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:36.710 20:27:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:36.710 20:27:33 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:32:36.710 20:27:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:36.710 20:27:33 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.710 20:27:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:36.710 20:27:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.710 20:27:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:32:36.710 Found net devices under 0000:27:00.0: cvl_0_0 00:32:36.710 20:27:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.710 20:27:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:36.710 20:27:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.710 20:27:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:36.710 20:27:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.710 20:27:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:32:36.710 Found net devices under 0000:27:00.1: cvl_0_1 00:32:36.710 20:27:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.710 20:27:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:36.710 20:27:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:36.710 20:27:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:36.710 20:27:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:36.710 20:27:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:36.710 20:27:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:36.710 20:27:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:36.711 20:27:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:36.711 20:27:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:36.711 20:27:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:36.711 20:27:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:36.711 20:27:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:36.711 20:27:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:36.711 20:27:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:36.711 20:27:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:36.711 20:27:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:36.711 20:27:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:36.711 20:27:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:36.711 20:27:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:36.711 20:27:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:36.711 20:27:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:36.711 20:27:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:36.711 20:27:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:36.711 20:27:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:36.711 20:27:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:36.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:36.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:32:36.711 00:32:36.711 --- 10.0.0.2 ping statistics --- 00:32:36.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.711 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:32:36.711 20:27:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:36.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:36.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:32:36.711 00:32:36.711 --- 10.0.0.1 ping statistics --- 00:32:36.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.711 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:32:36.711 20:27:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:36.711 20:27:34 -- nvmf/common.sh@410 -- # return 0 00:32:36.711 20:27:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:36.711 20:27:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:36.711 20:27:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:36.711 20:27:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:36.711 20:27:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:36.711 20:27:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:36.711 20:27:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:36.711 20:27:34 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:36.711 20:27:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:36.711 20:27:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:36.711 20:27:34 -- common/autotest_common.sh@10 -- # set +x 00:32:36.711 20:27:34 -- nvmf/common.sh@469 -- # nvmfpid=1744563 00:32:36.711 20:27:34 -- nvmf/common.sh@470 -- # waitforlisten 1744563 00:32:36.711 20:27:34 -- common/autotest_common.sh@819 -- # '[' -z 1744563 ']' 00:32:36.711 20:27:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.711 20:27:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:36.711 20:27:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.711 20:27:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:36.711 20:27:34 -- common/autotest_common.sh@10 -- # set +x 00:32:36.711 20:27:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:36.711 [2024-04-25 20:27:34.331244] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:36.711 [2024-04-25 20:27:34.331349] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:36.711 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.711 [2024-04-25 20:27:34.452064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.711 [2024-04-25 20:27:34.546871] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:36.711 [2024-04-25 20:27:34.547036] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:36.711 [2024-04-25 20:27:34.547049] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:36.711 [2024-04-25 20:27:34.547057] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
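[editor's note] For reference, the cvl_0_0_ns_spdk plumbing traced above reduces to the sequence below: the target-side interface is moved into its own network namespace, addresses are assigned on both sides, and the NVMe/TCP target is then launched inside that namespace. Interface names and addresses are the ones detected on this node; the nvmf_tgt path is abbreviated here.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the NVMe-oF target then runs entirely inside the namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2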
00:32:36.711 [2024-04-25 20:27:34.547081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:37.277 20:27:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:37.277 20:27:35 -- common/autotest_common.sh@852 -- # return 0 00:32:37.277 20:27:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:37.277 20:27:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:37.277 20:27:35 -- common/autotest_common.sh@10 -- # set +x 00:32:37.277 20:27:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:37.277 20:27:35 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:37.277 20:27:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:37.277 20:27:35 -- common/autotest_common.sh@10 -- # set +x 00:32:37.277 [2024-04-25 20:27:35.049528] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:37.277 20:27:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:37.277 20:27:35 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:37.277 20:27:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:37.277 20:27:35 -- common/autotest_common.sh@10 -- # set +x 00:32:37.277 [2024-04-25 20:27:35.057683] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:37.277 20:27:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:37.277 20:27:35 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:37.277 20:27:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:37.277 20:27:35 -- common/autotest_common.sh@10 -- # set +x 00:32:37.277 null0 00:32:37.277 20:27:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:37.277 20:27:35 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:37.277 20:27:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:37.277 20:27:35 -- common/autotest_common.sh@10 -- # set +x 00:32:37.277 null1 00:32:37.277 20:27:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:37.277 20:27:35 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:37.277 20:27:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:37.277 20:27:35 -- common/autotest_common.sh@10 -- # set +x 00:32:37.277 20:27:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:37.277 20:27:35 -- host/discovery.sh@45 -- # hostpid=1744590 00:32:37.277 20:27:35 -- host/discovery.sh@46 -- # waitforlisten 1744590 /tmp/host.sock 00:32:37.277 20:27:35 -- common/autotest_common.sh@819 -- # '[' -z 1744590 ']' 00:32:37.277 20:27:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:32:37.277 20:27:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:37.277 20:27:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:37.277 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:37.277 20:27:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:37.277 20:27:35 -- common/autotest_common.sh@10 -- # set +x 00:32:37.277 20:27:35 -- host/discovery.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:37.277 [2024-04-25 20:27:35.157853] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:32:37.277 [2024-04-25 20:27:35.157961] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744590 ] 00:32:37.537 EAL: No free 2048 kB hugepages reported on node 1 00:32:37.537 [2024-04-25 20:27:35.269648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.537 [2024-04-25 20:27:35.364916] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:37.537 [2024-04-25 20:27:35.365104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.108 20:27:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:38.108 20:27:35 -- common/autotest_common.sh@852 -- # return 0 00:32:38.108 20:27:35 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:38.108 20:27:35 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:38.108 20:27:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.108 20:27:35 -- common/autotest_common.sh@10 -- # set +x 00:32:38.108 20:27:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.108 20:27:35 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:38.108 20:27:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.108 20:27:35 -- common/autotest_common.sh@10 -- # set +x 00:32:38.108 20:27:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.108 20:27:35 -- host/discovery.sh@72 -- # notify_id=0 00:32:38.108 20:27:35 -- host/discovery.sh@78 -- # get_subsystem_names 00:32:38.108 20:27:35 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:38.108 20:27:35 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:38.108 20:27:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.108 20:27:35 -- host/discovery.sh@59 -- # sort 00:32:38.108 20:27:35 -- common/autotest_common.sh@10 -- # set +x 00:32:38.108 20:27:35 -- host/discovery.sh@59 -- # xargs 00:32:38.108 20:27:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.108 20:27:35 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:32:38.108 20:27:35 -- host/discovery.sh@79 -- # get_bdev_list 00:32:38.108 20:27:35 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.108 20:27:35 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:38.109 20:27:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.109 20:27:35 -- host/discovery.sh@55 -- # sort 00:32:38.109 20:27:35 -- common/autotest_common.sh@10 -- # set +x 00:32:38.109 20:27:35 -- host/discovery.sh@55 -- # xargs 00:32:38.109 20:27:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.109 20:27:35 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:32:38.109 20:27:35 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:38.109 20:27:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.109 20:27:35 -- common/autotest_common.sh@10 -- # set +x 00:32:38.109 20:27:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.109 20:27:35 -- host/discovery.sh@82 -- # get_subsystem_names 00:32:38.109 20:27:35 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:38.109 20:27:35 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:32:38.109 20:27:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.109 20:27:35 -- host/discovery.sh@59 -- # sort 00:32:38.109 20:27:35 -- common/autotest_common.sh@10 -- # set +x 00:32:38.109 20:27:35 -- host/discovery.sh@59 -- # xargs 00:32:38.109 20:27:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.109 20:27:36 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:32:38.109 20:27:36 -- host/discovery.sh@83 -- # get_bdev_list 00:32:38.109 20:27:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.109 20:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.109 20:27:36 -- common/autotest_common.sh@10 -- # set +x 00:32:38.109 20:27:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:38.109 20:27:36 -- host/discovery.sh@55 -- # sort 00:32:38.109 20:27:36 -- host/discovery.sh@55 -- # xargs 00:32:38.109 20:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.368 20:27:36 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:38.368 20:27:36 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:38.368 20:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.368 20:27:36 -- common/autotest_common.sh@10 -- # set +x 00:32:38.368 20:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.368 20:27:36 -- host/discovery.sh@86 -- # get_subsystem_names 00:32:38.368 20:27:36 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:38.368 20:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.368 20:27:36 -- common/autotest_common.sh@10 -- # set +x 00:32:38.368 20:27:36 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:38.368 20:27:36 -- host/discovery.sh@59 -- # sort 00:32:38.368 20:27:36 -- host/discovery.sh@59 -- # xargs 00:32:38.368 20:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.368 20:27:36 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:32:38.368 20:27:36 -- host/discovery.sh@87 -- # get_bdev_list 00:32:38.368 20:27:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.368 20:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.368 20:27:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:38.368 20:27:36 -- common/autotest_common.sh@10 -- # set +x 00:32:38.368 20:27:36 -- host/discovery.sh@55 -- # sort 00:32:38.368 20:27:36 -- host/discovery.sh@55 -- # xargs 00:32:38.368 20:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.368 20:27:36 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:38.368 20:27:36 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:38.368 20:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.368 20:27:36 -- common/autotest_common.sh@10 -- # set +x 00:32:38.368 [2024-04-25 20:27:36.153954] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:38.368 20:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.368 20:27:36 -- host/discovery.sh@92 -- # get_subsystem_names 00:32:38.368 20:27:36 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:38.368 20:27:36 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:38.368 20:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.368 20:27:36 -- host/discovery.sh@59 -- # sort 00:32:38.368 20:27:36 -- host/discovery.sh@59 -- # xargs 00:32:38.368 20:27:36 -- 
common/autotest_common.sh@10 -- # set +x 00:32:38.368 20:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.368 20:27:36 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:38.368 20:27:36 -- host/discovery.sh@93 -- # get_bdev_list 00:32:38.368 20:27:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.368 20:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.368 20:27:36 -- common/autotest_common.sh@10 -- # set +x 00:32:38.368 20:27:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:38.368 20:27:36 -- host/discovery.sh@55 -- # sort 00:32:38.368 20:27:36 -- host/discovery.sh@55 -- # xargs 00:32:38.368 20:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.368 20:27:36 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:32:38.368 20:27:36 -- host/discovery.sh@94 -- # get_notification_count 00:32:38.368 20:27:36 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:38.368 20:27:36 -- host/discovery.sh@74 -- # jq '. | length' 00:32:38.368 20:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.368 20:27:36 -- common/autotest_common.sh@10 -- # set +x 00:32:38.368 20:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.368 20:27:36 -- host/discovery.sh@74 -- # notification_count=0 00:32:38.368 20:27:36 -- host/discovery.sh@75 -- # notify_id=0 00:32:38.368 20:27:36 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:32:38.368 20:27:36 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:38.368 20:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.368 20:27:36 -- common/autotest_common.sh@10 -- # set +x 00:32:38.368 20:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.368 20:27:36 -- host/discovery.sh@100 -- # sleep 1 00:32:39.303 [2024-04-25 20:27:36.934628] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:39.303 [2024-04-25 20:27:36.934662] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:39.303 [2024-04-25 20:27:36.934679] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:39.303 [2024-04-25 20:27:37.023733] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:39.303 [2024-04-25 20:27:37.208056] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:39.303 [2024-04-25 20:27:37.208084] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:39.562 20:27:37 -- host/discovery.sh@101 -- # get_subsystem_names 00:32:39.562 20:27:37 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:39.562 20:27:37 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:39.562 20:27:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.562 20:27:37 -- host/discovery.sh@59 -- # sort 00:32:39.562 20:27:37 -- host/discovery.sh@59 -- # xargs 00:32:39.562 20:27:37 -- common/autotest_common.sh@10 -- # set +x 00:32:39.562 20:27:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.562 20:27:37 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.562 20:27:37 -- host/discovery.sh@102 -- # get_bdev_list 00:32:39.562 20:27:37 -- host/discovery.sh@55 
-- # xargs 00:32:39.562 20:27:37 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.562 20:27:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.562 20:27:37 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:39.562 20:27:37 -- common/autotest_common.sh@10 -- # set +x 00:32:39.562 20:27:37 -- host/discovery.sh@55 -- # sort 00:32:39.562 20:27:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.562 20:27:37 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:39.562 20:27:37 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:32:39.562 20:27:37 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:39.562 20:27:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.562 20:27:37 -- common/autotest_common.sh@10 -- # set +x 00:32:39.562 20:27:37 -- host/discovery.sh@63 -- # xargs 00:32:39.562 20:27:37 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:39.562 20:27:37 -- host/discovery.sh@63 -- # sort -n 00:32:39.562 20:27:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.562 20:27:37 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:32:39.562 20:27:37 -- host/discovery.sh@104 -- # get_notification_count 00:32:39.562 20:27:37 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:39.562 20:27:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.562 20:27:37 -- host/discovery.sh@74 -- # jq '. | length' 00:32:39.562 20:27:37 -- common/autotest_common.sh@10 -- # set +x 00:32:39.562 20:27:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.562 20:27:37 -- host/discovery.sh@74 -- # notification_count=1 00:32:39.562 20:27:37 -- host/discovery.sh@75 -- # notify_id=1 00:32:39.562 20:27:37 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:32:39.562 20:27:37 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:39.562 20:27:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.562 20:27:37 -- common/autotest_common.sh@10 -- # set +x 00:32:39.562 20:27:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.562 20:27:37 -- host/discovery.sh@109 -- # sleep 1 00:32:40.940 20:27:38 -- host/discovery.sh@110 -- # get_bdev_list 00:32:40.940 20:27:38 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:40.940 20:27:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:40.940 20:27:38 -- common/autotest_common.sh@10 -- # set +x 00:32:40.940 20:27:38 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:40.940 20:27:38 -- host/discovery.sh@55 -- # sort 00:32:40.940 20:27:38 -- host/discovery.sh@55 -- # xargs 00:32:40.940 20:27:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:40.940 20:27:38 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:40.940 20:27:38 -- host/discovery.sh@111 -- # get_notification_count 00:32:40.940 20:27:38 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:40.940 20:27:38 -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:40.940 20:27:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:40.940 20:27:38 -- common/autotest_common.sh@10 -- # set +x 00:32:40.940 20:27:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:40.940 20:27:38 -- host/discovery.sh@74 -- # notification_count=1 00:32:40.940 20:27:38 -- host/discovery.sh@75 -- # notify_id=2 00:32:40.940 20:27:38 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:32:40.940 20:27:38 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:40.940 20:27:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:40.940 20:27:38 -- common/autotest_common.sh@10 -- # set +x 00:32:40.940 [2024-04-25 20:27:38.547382] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:40.940 [2024-04-25 20:27:38.547790] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:40.940 [2024-04-25 20:27:38.547824] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:40.940 20:27:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:40.940 20:27:38 -- host/discovery.sh@117 -- # sleep 1 00:32:40.940 [2024-04-25 20:27:38.637857] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:40.940 [2024-04-25 20:27:38.695494] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:40.940 [2024-04-25 20:27:38.695519] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:40.940 [2024-04-25 20:27:38.695529] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:41.875 20:27:39 -- host/discovery.sh@118 -- # get_subsystem_names 00:32:41.876 20:27:39 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:41.876 20:27:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:41.876 20:27:39 -- common/autotest_common.sh@10 -- # set +x 00:32:41.876 20:27:39 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:41.876 20:27:39 -- host/discovery.sh@59 -- # sort 00:32:41.876 20:27:39 -- host/discovery.sh@59 -- # xargs 00:32:41.876 20:27:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:41.876 20:27:39 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.876 20:27:39 -- host/discovery.sh@119 -- # get_bdev_list 00:32:41.876 20:27:39 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:41.876 20:27:39 -- host/discovery.sh@55 -- # xargs 00:32:41.876 20:27:39 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:41.876 20:27:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:41.876 20:27:39 -- host/discovery.sh@55 -- # sort 00:32:41.876 20:27:39 -- common/autotest_common.sh@10 -- # set +x 00:32:41.876 20:27:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:41.876 20:27:39 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:41.876 20:27:39 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:32:41.876 20:27:39 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:41.876 20:27:39 -- host/discovery.sh@63 -- # xargs 00:32:41.876 20:27:39 -- common/autotest_common.sh@551 -- # xtrace_disable 
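[editor's note] At this point the second listener (port 4421) has been picked up automatically: the discovery AER makes the host re-read the discovery log page and attach the new path without any explicit attach call, after which the test expects both 4420 and 4421 on the nvme0 controller. A sketch of the equivalent host-side check, using scripts/rpc.py against the same /tmp/host.sock (roughly what the rpc_cmd helper above resolves to):

    # discovery was started once against the discovery service on port 8009
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    # after the target adds a listener on 4421, both paths should be reported
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n
    # expected output: 4420 followed by 4421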
00:32:41.876 20:27:39 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:41.876 20:27:39 -- common/autotest_common.sh@10 -- # set +x 00:32:41.876 20:27:39 -- host/discovery.sh@63 -- # sort -n 00:32:41.876 20:27:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:41.876 20:27:39 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:41.876 20:27:39 -- host/discovery.sh@121 -- # get_notification_count 00:32:41.876 20:27:39 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:41.876 20:27:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:41.876 20:27:39 -- common/autotest_common.sh@10 -- # set +x 00:32:41.876 20:27:39 -- host/discovery.sh@74 -- # jq '. | length' 00:32:41.876 20:27:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:41.876 20:27:39 -- host/discovery.sh@74 -- # notification_count=0 00:32:41.876 20:27:39 -- host/discovery.sh@75 -- # notify_id=2 00:32:41.876 20:27:39 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:32:41.876 20:27:39 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:41.876 20:27:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:41.876 20:27:39 -- common/autotest_common.sh@10 -- # set +x 00:32:41.876 [2024-04-25 20:27:39.721000] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:41.876 [2024-04-25 20:27:39.721034] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:41.876 20:27:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:41.876 20:27:39 -- host/discovery.sh@127 -- # sleep 1 00:32:41.876 [2024-04-25 20:27:39.725703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:41.876 [2024-04-25 20:27:39.725728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.876 [2024-04-25 20:27:39.725740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:41.876 [2024-04-25 20:27:39.725748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.876 [2024-04-25 20:27:39.725757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:41.876 [2024-04-25 20:27:39.725765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.876 [2024-04-25 20:27:39.725773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:41.876 [2024-04-25 20:27:39.725781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:41.876 [2024-04-25 20:27:39.725789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:41.876 [2024-04-25 20:27:39.735689] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:41.876 [2024-04-25 20:27:39.745701] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:41.876 [2024-04-25 20:27:39.746155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.876 [2024-04-25 20:27:39.746427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.876 [2024-04-25 20:27:39.746439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:41.876 [2024-04-25 20:27:39.746448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:41.876 [2024-04-25 20:27:39.746462] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:41.876 [2024-04-25 20:27:39.746479] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:41.876 [2024-04-25 20:27:39.746488] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:41.876 [2024-04-25 20:27:39.746503] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:41.876 [2024-04-25 20:27:39.746519] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.876 [2024-04-25 20:27:39.755746] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:41.876 [2024-04-25 20:27:39.755967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.876 [2024-04-25 20:27:39.756264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.876 [2024-04-25 20:27:39.756274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:41.876 [2024-04-25 20:27:39.756283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:41.876 [2024-04-25 20:27:39.756299] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:41.876 [2024-04-25 20:27:39.756314] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:41.876 [2024-04-25 20:27:39.756321] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:41.876 [2024-04-25 20:27:39.756329] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:41.876 [2024-04-25 20:27:39.756343] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.876 [2024-04-25 20:27:39.765783] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:41.876 [2024-04-25 20:27:39.766107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.876 [2024-04-25 20:27:39.766301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.876 [2024-04-25 20:27:39.766310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:41.876 [2024-04-25 20:27:39.766319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:41.876 [2024-04-25 20:27:39.766332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:41.876 [2024-04-25 20:27:39.766343] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:41.876 [2024-04-25 20:27:39.766350] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:41.876 [2024-04-25 20:27:39.766358] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:41.876 [2024-04-25 20:27:39.766370] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.876 [2024-04-25 20:27:39.775822] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:41.876 [2024-04-25 20:27:39.776214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.876 [2024-04-25 20:27:39.776544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.876 [2024-04-25 20:27:39.776555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:41.876 [2024-04-25 20:27:39.776563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:41.876 [2024-04-25 20:27:39.776576] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:41.876 [2024-04-25 20:27:39.776595] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:41.876 [2024-04-25 20:27:39.776603] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:41.876 [2024-04-25 20:27:39.776611] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:41.876 [2024-04-25 20:27:39.776623] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.876 [2024-04-25 20:27:39.785863] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:41.876 [2024-04-25 20:27:39.786244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.876 [2024-04-25 20:27:39.786604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.876 [2024-04-25 20:27:39.786615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:41.876 [2024-04-25 20:27:39.786623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:41.876 [2024-04-25 20:27:39.786634] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:41.876 [2024-04-25 20:27:39.786653] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:41.876 [2024-04-25 20:27:39.786660] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:41.876 [2024-04-25 20:27:39.786668] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:41.876 [2024-04-25 20:27:39.786679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:41.876 [2024-04-25 20:27:39.795899] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:41.876 [2024-04-25 20:27:39.796320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.876 [2024-04-25 20:27:39.796699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.876 [2024-04-25 20:27:39.796710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:41.876 [2024-04-25 20:27:39.796718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:41.876 [2024-04-25 20:27:39.796730] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:41.877 [2024-04-25 20:27:39.796749] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:41.877 [2024-04-25 20:27:39.796756] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:41.877 [2024-04-25 20:27:39.796764] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:41.877 [2024-04-25 20:27:39.796774] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:41.877 [2024-04-25 20:27:39.805932] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:41.877 [2024-04-25 20:27:39.806236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.877 [2024-04-25 20:27:39.806543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.877 [2024-04-25 20:27:39.806554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:41.877 [2024-04-25 20:27:39.806563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:41.877 [2024-04-25 20:27:39.806575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:41.877 [2024-04-25 20:27:39.806585] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:41.877 [2024-04-25 20:27:39.806592] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:41.877 [2024-04-25 20:27:39.806599] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:41.877 [2024-04-25 20:27:39.806610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:42.138 [2024-04-25 20:27:39.809779] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:42.138 [2024-04-25 20:27:39.809804] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:43.080 20:27:40 -- host/discovery.sh@128 -- # get_subsystem_names 00:32:43.080 20:27:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:43.080 20:27:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.080 20:27:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:43.080 20:27:40 -- host/discovery.sh@59 -- # sort 00:32:43.080 20:27:40 -- common/autotest_common.sh@10 -- # set +x 00:32:43.080 20:27:40 -- host/discovery.sh@59 -- # xargs 00:32:43.080 20:27:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.080 20:27:40 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.080 20:27:40 -- host/discovery.sh@129 -- # get_bdev_list 00:32:43.080 20:27:40 -- host/discovery.sh@55 -- # xargs 00:32:43.080 20:27:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:43.080 20:27:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:43.080 20:27:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.080 20:27:40 -- host/discovery.sh@55 -- # sort 00:32:43.080 20:27:40 -- common/autotest_common.sh@10 -- # set +x 00:32:43.080 20:27:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.080 20:27:40 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:43.080 20:27:40 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:32:43.080 20:27:40 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:43.080 20:27:40 -- host/discovery.sh@63 -- # xargs 00:32:43.080 20:27:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.080 20:27:40 -- common/autotest_common.sh@10 -- # set +x 00:32:43.080 20:27:40 -- 
host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:43.080 20:27:40 -- host/discovery.sh@63 -- # sort -n 00:32:43.080 20:27:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.080 20:27:40 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:32:43.080 20:27:40 -- host/discovery.sh@131 -- # get_notification_count 00:32:43.080 20:27:40 -- host/discovery.sh@74 -- # jq '. | length' 00:32:43.080 20:27:40 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:43.080 20:27:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.080 20:27:40 -- common/autotest_common.sh@10 -- # set +x 00:32:43.080 20:27:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.080 20:27:40 -- host/discovery.sh@74 -- # notification_count=0 00:32:43.080 20:27:40 -- host/discovery.sh@75 -- # notify_id=2 00:32:43.080 20:27:40 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:32:43.080 20:27:40 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:43.080 20:27:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.080 20:27:40 -- common/autotest_common.sh@10 -- # set +x 00:32:43.080 20:27:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.080 20:27:40 -- host/discovery.sh@135 -- # sleep 1 00:32:44.017 20:27:41 -- host/discovery.sh@136 -- # get_subsystem_names 00:32:44.017 20:27:41 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:44.017 20:27:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:44.017 20:27:41 -- common/autotest_common.sh@10 -- # set +x 00:32:44.017 20:27:41 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:44.017 20:27:41 -- host/discovery.sh@59 -- # sort 00:32:44.017 20:27:41 -- host/discovery.sh@59 -- # xargs 00:32:44.017 20:27:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:44.017 20:27:41 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:32:44.017 20:27:41 -- host/discovery.sh@137 -- # get_bdev_list 00:32:44.017 20:27:41 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:44.017 20:27:41 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:44.017 20:27:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:44.017 20:27:41 -- host/discovery.sh@55 -- # sort 00:32:44.017 20:27:41 -- common/autotest_common.sh@10 -- # set +x 00:32:44.017 20:27:41 -- host/discovery.sh@55 -- # xargs 00:32:44.017 20:27:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:44.278 20:27:41 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:32:44.278 20:27:41 -- host/discovery.sh@138 -- # get_notification_count 00:32:44.278 20:27:41 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:44.278 20:27:41 -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:44.278 20:27:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:44.278 20:27:41 -- common/autotest_common.sh@10 -- # set +x 00:32:44.278 20:27:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:44.278 20:27:42 -- host/discovery.sh@74 -- # notification_count=2 00:32:44.278 20:27:42 -- host/discovery.sh@75 -- # notify_id=4 00:32:44.278 20:27:42 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:32:44.278 20:27:42 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:44.278 20:27:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:44.278 20:27:42 -- common/autotest_common.sh@10 -- # set +x 00:32:45.218 [2024-04-25 20:27:43.066637] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:45.218 [2024-04-25 20:27:43.066670] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:45.218 [2024-04-25 20:27:43.066690] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:45.475 [2024-04-25 20:27:43.198790] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:45.475 [2024-04-25 20:27:43.298723] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:45.475 [2024-04-25 20:27:43.298762] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:45.475 20:27:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:45.475 20:27:43 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:45.475 20:27:43 -- common/autotest_common.sh@640 -- # local es=0 00:32:45.475 20:27:43 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:45.475 20:27:43 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:32:45.475 20:27:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:45.475 20:27:43 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:32:45.475 20:27:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:45.475 20:27:43 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:45.475 20:27:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:45.475 20:27:43 -- common/autotest_common.sh@10 -- # set +x 00:32:45.475 request: 00:32:45.475 { 00:32:45.475 "name": "nvme", 00:32:45.475 "trtype": "tcp", 00:32:45.475 "traddr": "10.0.0.2", 00:32:45.475 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:45.475 "adrfam": "ipv4", 00:32:45.475 "trsvcid": "8009", 00:32:45.475 "wait_for_attach": true, 00:32:45.475 "method": "bdev_nvme_start_discovery", 00:32:45.475 "req_id": 1 00:32:45.475 } 00:32:45.475 Got JSON-RPC error response 00:32:45.475 response: 00:32:45.475 { 00:32:45.475 "code": -17, 00:32:45.475 "message": "File exists" 00:32:45.475 } 00:32:45.475 20:27:43 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:32:45.475 20:27:43 -- common/autotest_common.sh@643 -- # es=1 00:32:45.475 20:27:43 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:45.475 20:27:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:45.475 20:27:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:45.475 20:27:43 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:32:45.475 20:27:43 -- host/discovery.sh@67 -- # sort 00:32:45.475 20:27:43 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:45.475 20:27:43 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:45.475 20:27:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:45.475 20:27:43 -- common/autotest_common.sh@10 -- # set +x 00:32:45.475 20:27:43 -- host/discovery.sh@67 -- # xargs 00:32:45.475 20:27:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:45.475 20:27:43 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:32:45.475 20:27:43 -- host/discovery.sh@147 -- # get_bdev_list 00:32:45.475 20:27:43 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:45.475 20:27:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:45.475 20:27:43 -- common/autotest_common.sh@10 -- # set +x 00:32:45.475 20:27:43 -- host/discovery.sh@55 -- # sort 00:32:45.475 20:27:43 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:45.475 20:27:43 -- host/discovery.sh@55 -- # xargs 00:32:45.475 20:27:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:45.475 20:27:43 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:45.475 20:27:43 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:45.475 20:27:43 -- common/autotest_common.sh@640 -- # local es=0 00:32:45.475 20:27:43 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:45.475 20:27:43 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:32:45.475 20:27:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:45.475 20:27:43 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:32:45.475 20:27:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:45.476 20:27:43 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:45.476 20:27:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:45.476 20:27:43 -- common/autotest_common.sh@10 -- # set +x 00:32:45.476 request: 00:32:45.476 { 00:32:45.476 "name": "nvme_second", 00:32:45.476 "trtype": "tcp", 00:32:45.476 "traddr": "10.0.0.2", 00:32:45.476 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:45.476 "adrfam": "ipv4", 00:32:45.476 "trsvcid": "8009", 00:32:45.476 "wait_for_attach": true, 00:32:45.476 "method": "bdev_nvme_start_discovery", 00:32:45.476 "req_id": 1 00:32:45.476 } 00:32:45.476 Got JSON-RPC error response 00:32:45.476 response: 00:32:45.476 { 00:32:45.476 "code": -17, 00:32:45.476 "message": "File exists" 00:32:45.476 } 00:32:45.476 20:27:43 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:32:45.476 20:27:43 -- common/autotest_common.sh@643 -- # es=1 00:32:45.476 20:27:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:45.476 20:27:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:45.476 20:27:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:45.476 
20:27:43 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:32:45.476 20:27:43 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:45.733 20:27:43 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:45.733 20:27:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:45.733 20:27:43 -- common/autotest_common.sh@10 -- # set +x 00:32:45.733 20:27:43 -- host/discovery.sh@67 -- # sort 00:32:45.733 20:27:43 -- host/discovery.sh@67 -- # xargs 00:32:45.733 20:27:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:45.733 20:27:43 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:32:45.733 20:27:43 -- host/discovery.sh@153 -- # get_bdev_list 00:32:45.733 20:27:43 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:45.733 20:27:43 -- host/discovery.sh@55 -- # xargs 00:32:45.733 20:27:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:45.733 20:27:43 -- common/autotest_common.sh@10 -- # set +x 00:32:45.733 20:27:43 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:45.733 20:27:43 -- host/discovery.sh@55 -- # sort 00:32:45.733 20:27:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:45.733 20:27:43 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:45.733 20:27:43 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:45.733 20:27:43 -- common/autotest_common.sh@640 -- # local es=0 00:32:45.733 20:27:43 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:45.733 20:27:43 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:32:45.733 20:27:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:45.733 20:27:43 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:32:45.733 20:27:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:45.733 20:27:43 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:45.733 20:27:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:45.733 20:27:43 -- common/autotest_common.sh@10 -- # set +x 00:32:46.672 [2024-04-25 20:27:44.491301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-04-25 20:27:44.491692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-04-25 20:27:44.491709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000006240 with addr=10.0.0.2, port=8010 00:32:46.672 [2024-04-25 20:27:44.491737] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:46.672 [2024-04-25 20:27:44.491748] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:46.672 [2024-04-25 20:27:44.491760] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:47.639 [2024-04-25 20:27:45.491347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.639 [2024-04-25 20:27:45.491750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.639 [2024-04-25 20:27:45.491762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x613000006400 with addr=10.0.0.2, port=8010 00:32:47.639 [2024-04-25 20:27:45.491791] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:47.639 [2024-04-25 20:27:45.491799] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:47.639 [2024-04-25 20:27:45.491808] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:48.576 [2024-04-25 20:27:46.490973] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:48.576 request: 00:32:48.576 { 00:32:48.576 "name": "nvme_second", 00:32:48.576 "trtype": "tcp", 00:32:48.576 "traddr": "10.0.0.2", 00:32:48.576 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:48.576 "adrfam": "ipv4", 00:32:48.576 "trsvcid": "8010", 00:32:48.576 "attach_timeout_ms": 3000, 00:32:48.576 "method": "bdev_nvme_start_discovery", 00:32:48.576 "req_id": 1 00:32:48.576 } 00:32:48.576 Got JSON-RPC error response 00:32:48.576 response: 00:32:48.576 { 00:32:48.576 "code": -110, 00:32:48.576 "message": "Connection timed out" 00:32:48.576 } 00:32:48.576 20:27:46 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:32:48.576 20:27:46 -- common/autotest_common.sh@643 -- # es=1 00:32:48.576 20:27:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:48.576 20:27:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:48.576 20:27:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:48.576 20:27:46 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:32:48.576 20:27:46 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:48.576 20:27:46 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:48.576 20:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:48.576 20:27:46 -- host/discovery.sh@67 -- # sort 00:32:48.576 20:27:46 -- host/discovery.sh@67 -- # xargs 00:32:48.576 20:27:46 -- common/autotest_common.sh@10 -- # set +x 00:32:48.837 20:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:48.837 20:27:46 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:32:48.837 20:27:46 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:32:48.837 20:27:46 -- host/discovery.sh@162 -- # kill 1744590 00:32:48.837 20:27:46 -- host/discovery.sh@163 -- # nvmftestfini 00:32:48.838 20:27:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:48.838 20:27:46 -- nvmf/common.sh@116 -- # sync 00:32:48.838 20:27:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:48.838 20:27:46 -- nvmf/common.sh@119 -- # set +e 00:32:48.838 20:27:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:48.838 20:27:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:48.838 rmmod nvme_tcp 00:32:48.838 rmmod nvme_fabrics 00:32:48.838 rmmod nvme_keyring 00:32:48.838 20:27:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:48.838 20:27:46 -- nvmf/common.sh@123 -- # set -e 00:32:48.838 20:27:46 -- nvmf/common.sh@124 -- # return 0 00:32:48.838 20:27:46 -- nvmf/common.sh@477 -- # '[' -n 1744563 ']' 00:32:48.838 20:27:46 -- nvmf/common.sh@478 -- # killprocess 1744563 00:32:48.838 20:27:46 -- common/autotest_common.sh@926 -- # '[' -z 1744563 ']' 00:32:48.838 20:27:46 -- common/autotest_common.sh@930 -- # kill -0 1744563 00:32:48.838 20:27:46 -- common/autotest_common.sh@931 -- # uname 00:32:48.838 20:27:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:48.838 20:27:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1744563 
00:32:48.838 20:27:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:48.838 20:27:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:48.838 20:27:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1744563' 00:32:48.838 killing process with pid 1744563 00:32:48.838 20:27:46 -- common/autotest_common.sh@945 -- # kill 1744563 00:32:48.838 20:27:46 -- common/autotest_common.sh@950 -- # wait 1744563 00:32:49.408 20:27:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:49.408 20:27:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:49.408 20:27:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:49.408 20:27:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:49.408 20:27:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:49.408 20:27:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.408 20:27:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:49.408 20:27:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.340 20:27:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:51.340 00:32:51.340 real 0m20.321s 00:32:51.340 user 0m27.051s 00:32:51.340 sys 0m5.207s 00:32:51.340 20:27:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:51.340 20:27:49 -- common/autotest_common.sh@10 -- # set +x 00:32:51.340 ************************************ 00:32:51.340 END TEST nvmf_discovery 00:32:51.340 ************************************ 00:32:51.340 20:27:49 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:51.340 20:27:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:51.340 20:27:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:51.340 20:27:49 -- common/autotest_common.sh@10 -- # set +x 00:32:51.340 ************************************ 00:32:51.340 START TEST nvmf_discovery_remove_ifc 00:32:51.340 ************************************ 00:32:51.340 20:27:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:51.656 * Looking for test storage... 
00:32:51.656 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:32:51.656 20:27:49 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:32:51.656 20:27:49 -- nvmf/common.sh@7 -- # uname -s 00:32:51.656 20:27:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:51.656 20:27:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:51.656 20:27:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:51.656 20:27:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:51.656 20:27:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:51.656 20:27:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:51.656 20:27:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:51.656 20:27:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:51.656 20:27:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:51.656 20:27:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:51.656 20:27:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:32:51.656 20:27:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:32:51.656 20:27:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:51.656 20:27:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:51.656 20:27:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:32:51.656 20:27:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:32:51.656 20:27:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:51.656 20:27:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:51.656 20:27:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:51.656 20:27:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.656 20:27:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.656 20:27:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.656 20:27:49 -- paths/export.sh@5 -- # export PATH 00:32:51.656 20:27:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.656 20:27:49 -- nvmf/common.sh@46 -- # : 0 00:32:51.656 20:27:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:51.656 20:27:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:51.656 20:27:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:51.656 20:27:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:51.656 20:27:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:51.656 20:27:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:51.656 20:27:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:51.656 20:27:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:51.656 20:27:49 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:51.656 20:27:49 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:51.656 20:27:49 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:51.656 20:27:49 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:51.656 20:27:49 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:51.656 20:27:49 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:51.656 20:27:49 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:51.656 20:27:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:51.656 20:27:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:51.656 20:27:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:51.656 20:27:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:51.656 20:27:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:51.656 20:27:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.656 20:27:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:51.656 20:27:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.656 20:27:49 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:32:51.656 20:27:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:51.656 20:27:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:51.656 20:27:49 -- common/autotest_common.sh@10 -- # set +x 00:32:56.929 20:27:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:56.929 20:27:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:56.929 20:27:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:56.929 
20:27:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:56.929 20:27:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:56.929 20:27:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:56.929 20:27:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:56.929 20:27:54 -- nvmf/common.sh@294 -- # net_devs=() 00:32:56.929 20:27:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:56.929 20:27:54 -- nvmf/common.sh@295 -- # e810=() 00:32:56.929 20:27:54 -- nvmf/common.sh@295 -- # local -ga e810 00:32:56.929 20:27:54 -- nvmf/common.sh@296 -- # x722=() 00:32:56.929 20:27:54 -- nvmf/common.sh@296 -- # local -ga x722 00:32:56.929 20:27:54 -- nvmf/common.sh@297 -- # mlx=() 00:32:56.929 20:27:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:56.929 20:27:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:56.929 20:27:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:56.929 20:27:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:56.929 20:27:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:56.929 20:27:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:56.929 20:27:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:56.929 20:27:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:56.929 20:27:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:56.929 20:27:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:56.929 20:27:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:56.929 20:27:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:56.929 20:27:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:56.929 20:27:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:56.929 20:27:54 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:32:56.929 20:27:54 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:32:56.929 20:27:54 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:32:56.929 20:27:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:56.929 20:27:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:56.929 20:27:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:32:56.929 Found 0000:27:00.0 (0x8086 - 0x159b) 00:32:56.929 20:27:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:56.929 20:27:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:56.929 20:27:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.929 20:27:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.929 20:27:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:56.929 20:27:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:56.929 20:27:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:32:56.929 Found 0000:27:00.1 (0x8086 - 0x159b) 00:32:56.929 20:27:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:56.929 20:27:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:56.929 20:27:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.929 20:27:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.929 20:27:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:56.929 20:27:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:56.929 20:27:54 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:32:56.929 20:27:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:56.929 
20:27:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.929 20:27:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:56.929 20:27:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.929 20:27:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:32:56.929 Found net devices under 0000:27:00.0: cvl_0_0 00:32:56.929 20:27:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.929 20:27:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:56.929 20:27:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.929 20:27:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:56.929 20:27:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.929 20:27:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:32:56.929 Found net devices under 0000:27:00.1: cvl_0_1 00:32:56.929 20:27:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.929 20:27:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:56.929 20:27:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:56.929 20:27:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:56.929 20:27:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:56.929 20:27:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:56.929 20:27:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:56.929 20:27:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:56.929 20:27:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:56.929 20:27:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:56.930 20:27:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:56.930 20:27:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:56.930 20:27:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:56.930 20:27:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:56.930 20:27:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:56.930 20:27:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:56.930 20:27:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:56.930 20:27:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:56.930 20:27:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:57.190 20:27:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:57.190 20:27:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:57.190 20:27:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:57.190 20:27:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:57.190 20:27:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:57.190 20:27:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:57.452 20:27:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:57.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:57.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:32:57.452 00:32:57.452 --- 10.0.0.2 ping statistics --- 00:32:57.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.452 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:32:57.452 20:27:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:57.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:57.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:32:57.452 00:32:57.452 --- 10.0.0.1 ping statistics --- 00:32:57.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.452 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:32:57.452 20:27:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:57.452 20:27:55 -- nvmf/common.sh@410 -- # return 0 00:32:57.452 20:27:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:57.452 20:27:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:57.452 20:27:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:57.452 20:27:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:57.452 20:27:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:57.452 20:27:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:57.452 20:27:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:57.452 20:27:55 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:57.452 20:27:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:57.452 20:27:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:57.452 20:27:55 -- common/autotest_common.sh@10 -- # set +x 00:32:57.452 20:27:55 -- nvmf/common.sh@469 -- # nvmfpid=1751033 00:32:57.452 20:27:55 -- nvmf/common.sh@470 -- # waitforlisten 1751033 00:32:57.452 20:27:55 -- common/autotest_common.sh@819 -- # '[' -z 1751033 ']' 00:32:57.452 20:27:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.452 20:27:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:57.452 20:27:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:57.452 20:27:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:57.452 20:27:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:57.452 20:27:55 -- common/autotest_common.sh@10 -- # set +x 00:32:57.452 [2024-04-25 20:27:55.267625] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:57.452 [2024-04-25 20:27:55.267750] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:57.452 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.713 [2024-04-25 20:27:55.410763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.713 [2024-04-25 20:27:55.508582] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:57.713 [2024-04-25 20:27:55.508794] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:57.713 [2024-04-25 20:27:55.508809] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:57.713 [2024-04-25 20:27:55.508819] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:57.713 [2024-04-25 20:27:55.508857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.284 20:27:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:58.284 20:27:55 -- common/autotest_common.sh@852 -- # return 0 00:32:58.284 20:27:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:58.284 20:27:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:58.284 20:27:55 -- common/autotest_common.sh@10 -- # set +x 00:32:58.284 20:27:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:58.284 20:27:56 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:58.284 20:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.284 20:27:56 -- common/autotest_common.sh@10 -- # set +x 00:32:58.284 [2024-04-25 20:27:56.035562] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:58.284 [2024-04-25 20:27:56.043776] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:58.284 null0 00:32:58.284 [2024-04-25 20:27:56.075657] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:58.284 20:27:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.284 20:27:56 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1751342 00:32:58.284 20:27:56 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1751342 /tmp/host.sock 00:32:58.284 20:27:56 -- common/autotest_common.sh@819 -- # '[' -z 1751342 ']' 00:32:58.284 20:27:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:32:58.284 20:27:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:58.284 20:27:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:58.284 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:58.284 20:27:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:58.284 20:27:56 -- common/autotest_common.sh@10 -- # set +x 00:32:58.284 20:27:56 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:58.284 [2024-04-25 20:27:56.174483] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:32:58.284 [2024-04-25 20:27:56.174601] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751342 ] 00:32:58.544 EAL: No free 2048 kB hugepages reported on node 1 00:32:58.544 [2024-04-25 20:27:56.290874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.544 [2024-04-25 20:27:56.385919] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:58.544 [2024-04-25 20:27:56.386106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.113 20:27:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:59.113 20:27:56 -- common/autotest_common.sh@852 -- # return 0 00:32:59.113 20:27:56 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:59.113 20:27:56 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:59.113 20:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:59.113 20:27:56 -- common/autotest_common.sh@10 -- # set +x 00:32:59.113 20:27:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:59.113 20:27:56 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:59.113 20:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:59.113 20:27:56 -- common/autotest_common.sh@10 -- # set +x 00:32:59.373 20:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:59.373 20:27:57 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:59.373 20:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:59.373 20:27:57 -- common/autotest_common.sh@10 -- # set +x 00:33:00.309 [2024-04-25 20:27:58.101884] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:00.309 [2024-04-25 20:27:58.101914] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:00.309 [2024-04-25 20:27:58.101935] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:00.309 [2024-04-25 20:27:58.189994] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:00.569 [2024-04-25 20:27:58.375771] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:00.569 [2024-04-25 20:27:58.375823] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:00.569 [2024-04-25 20:27:58.375860] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:00.569 [2024-04-25 20:27:58.375880] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:00.569 [2024-04-25 20:27:58.375906] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:00.569 20:27:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:00.569 20:27:58 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:00.569 [2024-04-25 20:27:58.378074] 
bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x613000003f40 was disconnected and freed. delete nvme_qpair. 00:33:00.569 20:27:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:00.569 20:27:58 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:00.569 20:27:58 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:00.569 20:27:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:00.569 20:27:58 -- common/autotest_common.sh@10 -- # set +x 00:33:00.569 20:27:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:00.569 20:27:58 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:00.569 20:27:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:00.569 20:27:58 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:00.569 20:27:58 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:00.569 20:27:58 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:00.829 20:27:58 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:00.829 20:27:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:00.829 20:27:58 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:00.829 20:27:58 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:00.829 20:27:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:00.829 20:27:58 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:00.829 20:27:58 -- common/autotest_common.sh@10 -- # set +x 00:33:00.829 20:27:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:00.829 20:27:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:00.829 20:27:58 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:00.829 20:27:58 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:01.766 20:27:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:01.767 20:27:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:01.767 20:27:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:01.767 20:27:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:01.767 20:27:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:01.767 20:27:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:01.767 20:27:59 -- common/autotest_common.sh@10 -- # set +x 00:33:01.767 20:27:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:01.767 20:27:59 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:01.767 20:27:59 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:03.141 20:28:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:03.141 20:28:00 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:03.141 20:28:00 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:03.141 20:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:03.141 20:28:00 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:03.141 20:28:00 -- common/autotest_common.sh@10 -- # set +x 00:33:03.141 20:28:00 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:03.141 20:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:03.141 20:28:00 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:03.141 20:28:00 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:04.078 20:28:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:04.078 20:28:01 -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:33:04.078 20:28:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:04.078 20:28:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:04.078 20:28:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:04.078 20:28:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:04.078 20:28:01 -- common/autotest_common.sh@10 -- # set +x 00:33:04.078 20:28:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:04.078 20:28:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:04.078 20:28:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:05.015 20:28:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:05.015 20:28:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:05.015 20:28:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:05.015 20:28:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.015 20:28:02 -- common/autotest_common.sh@10 -- # set +x 00:33:05.015 20:28:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:05.015 20:28:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:05.015 20:28:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.015 20:28:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:05.015 20:28:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:05.957 20:28:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:05.958 20:28:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:05.958 20:28:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:05.958 20:28:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:05.958 20:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.958 20:28:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:05.958 20:28:03 -- common/autotest_common.sh@10 -- # set +x 00:33:05.958 20:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.958 [2024-04-25 20:28:03.803266] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:05.958 [2024-04-25 20:28:03.803330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.958 [2024-04-25 20:28:03.803345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.958 [2024-04-25 20:28:03.803359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.958 [2024-04-25 20:28:03.803368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.958 [2024-04-25 20:28:03.803376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.958 [2024-04-25 20:28:03.803385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.958 [2024-04-25 20:28:03.803393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.958 [2024-04-25 20:28:03.803401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.958 [2024-04-25 20:28:03.803410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.958 [2024-04-25 20:28:03.803419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.958 [2024-04-25 20:28:03.803427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:05.958 20:28:03 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:05.958 20:28:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:05.958 [2024-04-25 20:28:03.813257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:05.958 [2024-04-25 20:28:03.823276] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:06.890 20:28:04 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:06.890 20:28:04 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:06.890 20:28:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:06.890 20:28:04 -- common/autotest_common.sh@10 -- # set +x 00:33:06.890 20:28:04 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:06.890 20:28:04 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:06.890 20:28:04 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:07.149 [2024-04-25 20:28:04.884517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:08.086 [2024-04-25 20:28:05.907541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:08.086 [2024-04-25 20:28:05.907611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:33:08.086 [2024-04-25 20:28:05.907635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:08.086 [2024-04-25 20:28:05.908232] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:08.086 [2024-04-25 20:28:05.908270] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.086 [2024-04-25 20:28:05.908317] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:08.086 [2024-04-25 20:28:05.908354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:08.086 [2024-04-25 20:28:05.908372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.086 [2024-04-25 20:28:05.908391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:08.086 [2024-04-25 20:28:05.908405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.086 [2024-04-25 20:28:05.908420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:08.086 [2024-04-25 20:28:05.908435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.086 [2024-04-25 20:28:05.908451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:08.086 [2024-04-25 20:28:05.908466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.086 [2024-04-25 20:28:05.908483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:08.086 [2024-04-25 20:28:05.908520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.086 [2024-04-25 20:28:05.908536] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:33:08.086 [2024-04-25 20:28:05.908681] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6130000034c0 (9): Bad file descriptor 00:33:08.086 [2024-04-25 20:28:05.909693] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:08.086 [2024-04-25 20:28:05.909711] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:33:08.086 20:28:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:08.086 20:28:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:08.086 20:28:05 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:09.021 20:28:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:09.021 20:28:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:09.021 20:28:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:09.021 20:28:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:09.021 20:28:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.021 20:28:06 -- common/autotest_common.sh@10 -- # set +x 00:33:09.021 20:28:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:09.021 20:28:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.281 20:28:06 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:09.281 20:28:06 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:09.281 20:28:06 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:09.281 20:28:07 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:09.281 20:28:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:09.281 20:28:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:09.281 20:28:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:09.281 20:28:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.281 20:28:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:09.281 20:28:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:09.281 20:28:07 -- common/autotest_common.sh@10 -- # set +x 00:33:09.281 20:28:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.281 20:28:07 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:09.281 20:28:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:10.222 [2024-04-25 20:28:07.963562] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:10.222 [2024-04-25 20:28:07.963588] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:10.222 [2024-04-25 20:28:07.963608] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:10.222 [2024-04-25 20:28:08.095721] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:10.222 20:28:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:10.223 20:28:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:10.223 20:28:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:10.223 20:28:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:10.223 20:28:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:10.223 20:28:08 -- common/autotest_common.sh@10 -- # set +x 
00:33:10.223 20:28:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:10.223 20:28:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:10.223 20:28:08 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:10.223 20:28:08 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:10.481 [2024-04-25 20:28:08.193086] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:10.481 [2024-04-25 20:28:08.193138] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:10.481 [2024-04-25 20:28:08.193172] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:10.481 [2024-04-25 20:28:08.193191] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:10.481 [2024-04-25 20:28:08.193203] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:10.481 [2024-04-25 20:28:08.199801] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x613000004d40 was disconnected and freed. delete nvme_qpair. 00:33:11.417 20:28:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:11.417 20:28:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:11.417 20:28:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:11.417 20:28:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:11.417 20:28:09 -- common/autotest_common.sh@10 -- # set +x 00:33:11.417 20:28:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:11.417 20:28:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:11.417 20:28:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:11.417 20:28:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:11.417 20:28:09 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:11.417 20:28:09 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1751342 00:33:11.417 20:28:09 -- common/autotest_common.sh@926 -- # '[' -z 1751342 ']' 00:33:11.417 20:28:09 -- common/autotest_common.sh@930 -- # kill -0 1751342 00:33:11.417 20:28:09 -- common/autotest_common.sh@931 -- # uname 00:33:11.417 20:28:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:11.417 20:28:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1751342 00:33:11.417 20:28:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:11.417 20:28:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:11.418 20:28:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1751342' 00:33:11.418 killing process with pid 1751342 00:33:11.418 20:28:09 -- common/autotest_common.sh@945 -- # kill 1751342 00:33:11.418 20:28:09 -- common/autotest_common.sh@950 -- # wait 1751342 00:33:11.678 20:28:09 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:11.678 20:28:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:11.678 20:28:09 -- nvmf/common.sh@116 -- # sync 00:33:11.678 20:28:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:11.678 20:28:09 -- nvmf/common.sh@119 -- # set +e 00:33:11.678 20:28:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:11.678 20:28:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:11.678 rmmod nvme_tcp 00:33:11.937 rmmod nvme_fabrics 00:33:11.937 rmmod nvme_keyring 00:33:11.937 20:28:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:11.937 20:28:09 -- nvmf/common.sh@123 -- # set -e 00:33:11.937 
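The wait_for_bdev polling traced above reduces to the following pattern (a sketch distilled from this run: the rpc.py path, /tmp/host.sock and the interface names are the ones on this rig; the 30-try bound is an illustrative assumption, the test itself simply sleeps 1s per poll):

rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
get_bdev_list() { $rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
# re-plumb the target-side interface, then wait for discovery to re-attach the namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
for i in $(seq 1 30); do
    [[ "$(get_bdev_list)" == *nvme1n1* ]] && break   # nvme1n1 reappears once re-discovered
    sleep 1
done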
20:28:09 -- nvmf/common.sh@124 -- # return 0 00:33:11.937 20:28:09 -- nvmf/common.sh@477 -- # '[' -n 1751033 ']' 00:33:11.937 20:28:09 -- nvmf/common.sh@478 -- # killprocess 1751033 00:33:11.937 20:28:09 -- common/autotest_common.sh@926 -- # '[' -z 1751033 ']' 00:33:11.937 20:28:09 -- common/autotest_common.sh@930 -- # kill -0 1751033 00:33:11.937 20:28:09 -- common/autotest_common.sh@931 -- # uname 00:33:11.937 20:28:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:11.937 20:28:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1751033 00:33:11.937 20:28:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:11.937 20:28:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:11.937 20:28:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1751033' 00:33:11.937 killing process with pid 1751033 00:33:11.937 20:28:09 -- common/autotest_common.sh@945 -- # kill 1751033 00:33:11.937 20:28:09 -- common/autotest_common.sh@950 -- # wait 1751033 00:33:12.506 20:28:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:12.506 20:28:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:12.506 20:28:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:12.506 20:28:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:12.506 20:28:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:12.506 20:28:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.506 20:28:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:12.506 20:28:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.412 20:28:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:14.412 00:33:14.412 real 0m23.030s 00:33:14.412 user 0m28.133s 00:33:14.412 sys 0m5.643s 00:33:14.412 20:28:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:14.412 20:28:12 -- common/autotest_common.sh@10 -- # set +x 00:33:14.412 ************************************ 00:33:14.412 END TEST nvmf_discovery_remove_ifc 00:33:14.412 ************************************ 00:33:14.412 20:28:12 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:33:14.412 20:28:12 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:14.412 20:28:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:14.412 20:28:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:14.412 20:28:12 -- common/autotest_common.sh@10 -- # set +x 00:33:14.412 ************************************ 00:33:14.412 START TEST nvmf_digest 00:33:14.412 ************************************ 00:33:14.412 20:28:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:14.671 * Looking for test storage... 
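For reference, the nvmftestfini cleanup traced above amounts to roughly this sequence (a sketch; the pid is the one from this run, the rmmod lines above are the verbose output of the modprobe calls, and the namespace removal is assumed to boil down to a plain ip netns delete):

sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill 1751033                     # the nvmf target started for this test case
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1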
00:33:14.671 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:33:14.671 20:28:12 -- host/digest.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:33:14.671 20:28:12 -- nvmf/common.sh@7 -- # uname -s 00:33:14.671 20:28:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:14.671 20:28:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:14.671 20:28:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:14.671 20:28:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:14.671 20:28:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:14.671 20:28:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:14.671 20:28:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:14.671 20:28:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:14.671 20:28:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:14.671 20:28:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:14.671 20:28:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:33:14.671 20:28:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:33:14.671 20:28:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:14.671 20:28:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:14.671 20:28:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:33:14.671 20:28:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:33:14.671 20:28:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:14.671 20:28:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:14.671 20:28:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:14.671 20:28:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.671 20:28:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.671 20:28:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.671 20:28:12 -- paths/export.sh@5 -- # export PATH 00:33:14.671 20:28:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.671 20:28:12 -- nvmf/common.sh@46 -- # : 0 00:33:14.671 20:28:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:14.671 20:28:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:14.671 20:28:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:14.671 20:28:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:14.671 20:28:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:14.671 20:28:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:14.671 20:28:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:14.671 20:28:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:14.671 20:28:12 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:14.671 20:28:12 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:14.671 20:28:12 -- host/digest.sh@16 -- # runtime=2 00:33:14.671 20:28:12 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:33:14.671 20:28:12 -- host/digest.sh@132 -- # nvmftestinit 00:33:14.671 20:28:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:14.671 20:28:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:14.671 20:28:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:14.671 20:28:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:14.671 20:28:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:14.671 20:28:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.671 20:28:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:14.671 20:28:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.671 20:28:12 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:33:14.671 20:28:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:14.671 20:28:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:14.671 20:28:12 -- common/autotest_common.sh@10 -- # set +x 00:33:20.041 20:28:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:20.041 20:28:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:20.041 20:28:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:20.041 20:28:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:20.041 20:28:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:20.041 20:28:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:20.041 20:28:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:20.041 20:28:17 -- 
nvmf/common.sh@294 -- # net_devs=() 00:33:20.041 20:28:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:20.041 20:28:17 -- nvmf/common.sh@295 -- # e810=() 00:33:20.041 20:28:17 -- nvmf/common.sh@295 -- # local -ga e810 00:33:20.041 20:28:17 -- nvmf/common.sh@296 -- # x722=() 00:33:20.041 20:28:17 -- nvmf/common.sh@296 -- # local -ga x722 00:33:20.041 20:28:17 -- nvmf/common.sh@297 -- # mlx=() 00:33:20.041 20:28:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:20.041 20:28:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:20.041 20:28:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:20.041 20:28:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:20.041 20:28:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:20.041 20:28:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:20.041 20:28:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:20.041 20:28:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:20.041 20:28:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:20.041 20:28:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:20.041 20:28:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:20.041 20:28:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:20.041 20:28:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:20.041 20:28:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:20.041 20:28:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:20.041 20:28:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:33:20.041 Found 0000:27:00.0 (0x8086 - 0x159b) 00:33:20.041 20:28:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:20.041 20:28:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:33:20.041 Found 0000:27:00.1 (0x8086 - 0x159b) 00:33:20.041 20:28:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:20.041 20:28:17 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:20.041 20:28:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.041 20:28:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:20.041 20:28:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.041 20:28:17 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:33:20.041 Found net devices under 0000:27:00.0: cvl_0_0 00:33:20.041 20:28:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.041 20:28:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:20.041 20:28:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.041 20:28:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:20.041 20:28:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.041 20:28:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:33:20.041 Found net devices under 0000:27:00.1: cvl_0_1 00:33:20.041 20:28:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.041 20:28:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:20.041 20:28:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:20.041 20:28:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:20.041 20:28:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:20.041 20:28:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:20.041 20:28:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:20.041 20:28:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:20.041 20:28:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:20.041 20:28:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:20.041 20:28:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:20.041 20:28:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:20.041 20:28:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:20.041 20:28:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:20.041 20:28:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:20.041 20:28:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:20.041 20:28:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:20.041 20:28:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:20.041 20:28:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:20.041 20:28:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:20.041 20:28:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:20.041 20:28:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:20.041 20:28:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:20.041 20:28:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:20.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:20.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:33:20.041 00:33:20.041 --- 10.0.0.2 ping statistics --- 00:33:20.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.041 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:33:20.041 20:28:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:20.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:20.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:33:20.041 00:33:20.041 --- 10.0.0.1 ping statistics --- 00:33:20.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.041 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:33:20.041 20:28:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:20.041 20:28:17 -- nvmf/common.sh@410 -- # return 0 00:33:20.041 20:28:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:20.041 20:28:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:20.041 20:28:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:20.041 20:28:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:20.041 20:28:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:20.041 20:28:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:20.041 20:28:17 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:20.041 20:28:17 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:33:20.041 20:28:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:20.041 20:28:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:20.041 20:28:17 -- common/autotest_common.sh@10 -- # set +x 00:33:20.041 ************************************ 00:33:20.041 START TEST nvmf_digest_clean 00:33:20.041 ************************************ 00:33:20.041 20:28:17 -- common/autotest_common.sh@1104 -- # run_digest 00:33:20.041 20:28:17 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:33:20.041 20:28:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:20.041 20:28:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:20.041 20:28:17 -- common/autotest_common.sh@10 -- # set +x 00:33:20.041 20:28:17 -- nvmf/common.sh@469 -- # nvmfpid=1757920 00:33:20.041 20:28:17 -- nvmf/common.sh@470 -- # waitforlisten 1757920 00:33:20.041 20:28:17 -- common/autotest_common.sh@819 -- # '[' -z 1757920 ']' 00:33:20.041 20:28:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:20.042 20:28:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:20.042 20:28:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:20.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:20.042 20:28:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:20.042 20:28:17 -- common/autotest_common.sh@10 -- # set +x 00:33:20.042 20:28:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:20.042 [2024-04-25 20:28:17.718778] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
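The nvmf_tcp_init trace above sets up the test network as follows (a condensed sketch; the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are specific to this test bed): the target-side port is moved into a private network namespace and the initiator reaches it over TCP port 4420.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator NIC stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # target reachable from the root ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # and the initiator from the namespace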
00:33:20.042 [2024-04-25 20:28:17.718881] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:20.042 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.042 [2024-04-25 20:28:17.834937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.042 [2024-04-25 20:28:17.930410] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:20.042 [2024-04-25 20:28:17.930585] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:20.042 [2024-04-25 20:28:17.930598] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:20.042 [2024-04-25 20:28:17.930607] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:20.042 [2024-04-25 20:28:17.930631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.610 20:28:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:20.610 20:28:18 -- common/autotest_common.sh@852 -- # return 0 00:33:20.610 20:28:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:20.610 20:28:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:20.610 20:28:18 -- common/autotest_common.sh@10 -- # set +x 00:33:20.610 20:28:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:20.610 20:28:18 -- host/digest.sh@120 -- # common_target_config 00:33:20.610 20:28:18 -- host/digest.sh@43 -- # rpc_cmd 00:33:20.610 20:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:20.610 20:28:18 -- common/autotest_common.sh@10 -- # set +x 00:33:20.870 null0 00:33:20.870 [2024-04-25 20:28:18.602402] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:20.870 [2024-04-25 20:28:18.626536] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:20.870 20:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:20.870 20:28:18 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:33:20.870 20:28:18 -- host/digest.sh@77 -- # local rw bs qd 00:33:20.870 20:28:18 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:20.870 20:28:18 -- host/digest.sh@80 -- # rw=randread 00:33:20.870 20:28:18 -- host/digest.sh@80 -- # bs=4096 00:33:20.870 20:28:18 -- host/digest.sh@80 -- # qd=128 00:33:20.870 20:28:18 -- host/digest.sh@82 -- # bperfpid=1758037 00:33:20.870 20:28:18 -- host/digest.sh@83 -- # waitforlisten 1758037 /var/tmp/bperf.sock 00:33:20.870 20:28:18 -- common/autotest_common.sh@819 -- # '[' -z 1758037 ']' 00:33:20.870 20:28:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:20.870 20:28:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:20.870 20:28:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:20.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:20.870 20:28:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:20.870 20:28:18 -- common/autotest_common.sh@10 -- # set +x 00:33:20.870 20:28:18 -- host/digest.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:20.870 [2024-04-25 20:28:18.702161] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:20.870 [2024-04-25 20:28:18.702278] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1758037 ] 00:33:20.870 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.129 [2024-04-25 20:28:18.820080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.129 [2024-04-25 20:28:18.918150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.696 20:28:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:21.696 20:28:19 -- common/autotest_common.sh@852 -- # return 0 00:33:21.696 20:28:19 -- host/digest.sh@85 -- # [[ 1 -eq 1 ]] 00:33:21.696 20:28:19 -- host/digest.sh@85 -- # bperf_rpc dsa_scan_accel_module 00:33:21.696 20:28:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:33:21.696 [2024-04-25 20:28:19.522685] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:33:21.696 20:28:19 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:33:21.696 20:28:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:26.986 20:28:24 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:26.986 20:28:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:27.246 nvme0n1 00:33:27.246 20:28:25 -- host/digest.sh@91 -- # bperf_py perform_tests 00:33:27.246 20:28:25 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:27.505 Running I/O for 2 seconds... 
00:33:29.408 00:33:29.408 Latency(us) 00:33:29.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.408 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:29.408 nvme0n1 : 2.00 20874.48 81.54 0.00 0.00 6126.67 2431.73 12900.24 00:33:29.408 =================================================================================================================== 00:33:29.408 Total : 20874.48 81.54 0.00 0.00 6126.67 2431.73 12900.24 00:33:29.408 0 00:33:29.408 20:28:27 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:33:29.408 20:28:27 -- host/digest.sh@92 -- # get_accel_stats 00:33:29.408 20:28:27 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:29.408 20:28:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:29.408 20:28:27 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:29.408 | select(.opcode=="crc32c") 00:33:29.408 | "\(.module_name) \(.executed)"' 00:33:29.669 20:28:27 -- host/digest.sh@93 -- # [[ 1 -eq 1 ]] 00:33:29.669 20:28:27 -- host/digest.sh@93 -- # exp_module=dsa 00:33:29.669 20:28:27 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:33:29.669 20:28:27 -- host/digest.sh@95 -- # [[ dsa == \d\s\a ]] 00:33:29.669 20:28:27 -- host/digest.sh@97 -- # killprocess 1758037 00:33:29.669 20:28:27 -- common/autotest_common.sh@926 -- # '[' -z 1758037 ']' 00:33:29.669 20:28:27 -- common/autotest_common.sh@930 -- # kill -0 1758037 00:33:29.669 20:28:27 -- common/autotest_common.sh@931 -- # uname 00:33:29.669 20:28:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:29.669 20:28:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1758037 00:33:29.669 20:28:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:29.669 20:28:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:29.669 20:28:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1758037' 00:33:29.669 killing process with pid 1758037 00:33:29.669 20:28:27 -- common/autotest_common.sh@945 -- # kill 1758037 00:33:29.669 Received shutdown signal, test time was about 2.000000 seconds 00:33:29.669 00:33:29.669 Latency(us) 00:33:29.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.669 =================================================================================================================== 00:33:29.669 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:29.669 20:28:27 -- common/autotest_common.sh@950 -- # wait 1758037 00:33:31.050 20:28:28 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:33:31.050 20:28:28 -- host/digest.sh@77 -- # local rw bs qd 00:33:31.050 20:28:28 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:31.050 20:28:28 -- host/digest.sh@80 -- # rw=randread 00:33:31.050 20:28:28 -- host/digest.sh@80 -- # bs=131072 00:33:31.050 20:28:28 -- host/digest.sh@80 -- # qd=16 00:33:31.050 20:28:28 -- host/digest.sh@82 -- # bperfpid=1760122 00:33:31.050 20:28:28 -- host/digest.sh@83 -- # waitforlisten 1760122 /var/tmp/bperf.sock 00:33:31.050 20:28:28 -- common/autotest_common.sh@819 -- # '[' -z 1760122 ']' 00:33:31.050 20:28:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:31.050 20:28:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:31.050 20:28:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
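Each run_bperf pass above follows the same flow; a sketch distilled from the trace (paths shortened to $SPDK, the flags shown are the first randread case and later passes only change -w/-o/-q, and the harness waits for the bperf socket to appear before issuing RPCs):

SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
BPERF=/var/tmp/bperf.sock
$SPDK/build/examples/bdevperf -m 2 -r $BPERF -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
$SPDK/scripts/rpc.py -s $BPERF dsa_scan_accel_module       # route crc32c to DSA before init
$SPDK/scripts/rpc.py -s $BPERF framework_start_init
$SPDK/scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0     # --ddgst enables the TCP data digest
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests
# then confirm the digests were offloaded to DSA rather than computed in software:
$SPDK/scripts/rpc.py -s $BPERF accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'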
/var/tmp/bperf.sock...' 00:33:31.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:31.050 20:28:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:31.050 20:28:28 -- common/autotest_common.sh@10 -- # set +x 00:33:31.050 20:28:28 -- host/digest.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:31.050 [2024-04-25 20:28:28.899440] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:31.050 [2024-04-25 20:28:28.899557] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1760122 ] 00:33:31.050 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:31.050 Zero copy mechanism will not be used. 00:33:31.050 EAL: No free 2048 kB hugepages reported on node 1 00:33:31.310 [2024-04-25 20:28:29.012165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.310 [2024-04-25 20:28:29.106305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.882 20:28:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:31.882 20:28:29 -- common/autotest_common.sh@852 -- # return 0 00:33:31.882 20:28:29 -- host/digest.sh@85 -- # [[ 1 -eq 1 ]] 00:33:31.882 20:28:29 -- host/digest.sh@85 -- # bperf_rpc dsa_scan_accel_module 00:33:31.882 20:28:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:33:31.882 [2024-04-25 20:28:29.738835] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:33:31.882 20:28:29 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:33:31.882 20:28:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:37.160 20:28:34 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:37.160 20:28:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:37.418 nvme0n1 00:33:37.418 20:28:35 -- host/digest.sh@91 -- # bperf_py perform_tests 00:33:37.418 20:28:35 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:37.418 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:37.418 Zero copy mechanism will not be used. 00:33:37.418 Running I/O for 2 seconds... 
00:33:39.324 00:33:39.324 Latency(us) 00:33:39.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.324 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:39.324 nvme0n1 : 2.00 6439.43 804.93 0.00 0.00 2482.43 595.00 6139.69 00:33:39.324 =================================================================================================================== 00:33:39.324 Total : 6439.43 804.93 0.00 0.00 2482.43 595.00 6139.69 00:33:39.324 0 00:33:39.583 20:28:37 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:33:39.583 20:28:37 -- host/digest.sh@92 -- # get_accel_stats 00:33:39.583 20:28:37 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:39.583 20:28:37 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:39.583 | select(.opcode=="crc32c") 00:33:39.583 | "\(.module_name) \(.executed)"' 00:33:39.583 20:28:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:39.583 20:28:37 -- host/digest.sh@93 -- # [[ 1 -eq 1 ]] 00:33:39.583 20:28:37 -- host/digest.sh@93 -- # exp_module=dsa 00:33:39.583 20:28:37 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:33:39.583 20:28:37 -- host/digest.sh@95 -- # [[ dsa == \d\s\a ]] 00:33:39.583 20:28:37 -- host/digest.sh@97 -- # killprocess 1760122 00:33:39.583 20:28:37 -- common/autotest_common.sh@926 -- # '[' -z 1760122 ']' 00:33:39.583 20:28:37 -- common/autotest_common.sh@930 -- # kill -0 1760122 00:33:39.583 20:28:37 -- common/autotest_common.sh@931 -- # uname 00:33:39.583 20:28:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:39.583 20:28:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1760122 00:33:39.583 20:28:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:39.583 20:28:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:39.583 20:28:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1760122' 00:33:39.583 killing process with pid 1760122 00:33:39.583 20:28:37 -- common/autotest_common.sh@945 -- # kill 1760122 00:33:39.583 Received shutdown signal, test time was about 2.000000 seconds 00:33:39.583 00:33:39.583 Latency(us) 00:33:39.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.583 =================================================================================================================== 00:33:39.583 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:39.583 20:28:37 -- common/autotest_common.sh@950 -- # wait 1760122 00:33:40.969 20:28:38 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:33:40.969 20:28:38 -- host/digest.sh@77 -- # local rw bs qd 00:33:40.969 20:28:38 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:40.969 20:28:38 -- host/digest.sh@80 -- # rw=randwrite 00:33:40.969 20:28:38 -- host/digest.sh@80 -- # bs=4096 00:33:40.969 20:28:38 -- host/digest.sh@80 -- # qd=128 00:33:40.969 20:28:38 -- host/digest.sh@82 -- # bperfpid=1761935 00:33:40.969 20:28:38 -- host/digest.sh@83 -- # waitforlisten 1761935 /var/tmp/bperf.sock 00:33:40.969 20:28:38 -- common/autotest_common.sh@819 -- # '[' -z 1761935 ']' 00:33:40.969 20:28:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:40.969 20:28:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:40.969 20:28:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:33:40.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:40.969 20:28:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:40.969 20:28:38 -- common/autotest_common.sh@10 -- # set +x 00:33:40.969 20:28:38 -- host/digest.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:41.228 [2024-04-25 20:28:38.940933] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:41.228 [2024-04-25 20:28:38.941061] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1761935 ] 00:33:41.228 EAL: No free 2048 kB hugepages reported on node 1 00:33:41.228 [2024-04-25 20:28:39.060900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.228 [2024-04-25 20:28:39.155338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.794 20:28:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:41.794 20:28:39 -- common/autotest_common.sh@852 -- # return 0 00:33:41.794 20:28:39 -- host/digest.sh@85 -- # [[ 1 -eq 1 ]] 00:33:41.794 20:28:39 -- host/digest.sh@85 -- # bperf_rpc dsa_scan_accel_module 00:33:41.794 20:28:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:33:42.052 [2024-04-25 20:28:39.763850] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:33:42.052 20:28:39 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:33:42.052 20:28:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:47.341 20:28:44 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:47.341 20:28:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:47.341 nvme0n1 00:33:47.341 20:28:45 -- host/digest.sh@91 -- # bperf_py perform_tests 00:33:47.341 20:28:45 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:47.600 Running I/O for 2 seconds... 
00:33:49.587 00:33:49.587 Latency(us) 00:33:49.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:49.587 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:49.587 nvme0n1 : 2.00 28199.61 110.15 0.00 0.00 4529.90 2121.30 10485.76 00:33:49.587 =================================================================================================================== 00:33:49.587 Total : 28199.61 110.15 0.00 0.00 4529.90 2121.30 10485.76 00:33:49.587 0 00:33:49.587 20:28:47 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:33:49.587 20:28:47 -- host/digest.sh@92 -- # get_accel_stats 00:33:49.587 20:28:47 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:49.587 20:28:47 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:49.587 | select(.opcode=="crc32c") 00:33:49.587 | "\(.module_name) \(.executed)"' 00:33:49.587 20:28:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:49.587 20:28:47 -- host/digest.sh@93 -- # [[ 1 -eq 1 ]] 00:33:49.587 20:28:47 -- host/digest.sh@93 -- # exp_module=dsa 00:33:49.587 20:28:47 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:33:49.587 20:28:47 -- host/digest.sh@95 -- # [[ dsa == \d\s\a ]] 00:33:49.587 20:28:47 -- host/digest.sh@97 -- # killprocess 1761935 00:33:49.587 20:28:47 -- common/autotest_common.sh@926 -- # '[' -z 1761935 ']' 00:33:49.587 20:28:47 -- common/autotest_common.sh@930 -- # kill -0 1761935 00:33:49.587 20:28:47 -- common/autotest_common.sh@931 -- # uname 00:33:49.587 20:28:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:49.587 20:28:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1761935 00:33:49.848 20:28:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:49.848 20:28:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:49.848 20:28:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1761935' 00:33:49.848 killing process with pid 1761935 00:33:49.848 20:28:47 -- common/autotest_common.sh@945 -- # kill 1761935 00:33:49.848 Received shutdown signal, test time was about 2.000000 seconds 00:33:49.848 00:33:49.848 Latency(us) 00:33:49.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:49.848 =================================================================================================================== 00:33:49.848 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:49.848 20:28:47 -- common/autotest_common.sh@950 -- # wait 1761935 00:33:51.227 20:28:48 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:33:51.227 20:28:48 -- host/digest.sh@77 -- # local rw bs qd 00:33:51.227 20:28:48 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:51.227 20:28:48 -- host/digest.sh@80 -- # rw=randwrite 00:33:51.227 20:28:48 -- host/digest.sh@80 -- # bs=131072 00:33:51.227 20:28:48 -- host/digest.sh@80 -- # qd=16 00:33:51.227 20:28:48 -- host/digest.sh@82 -- # bperfpid=1764024 00:33:51.227 20:28:48 -- host/digest.sh@83 -- # waitforlisten 1764024 /var/tmp/bperf.sock 00:33:51.227 20:28:48 -- common/autotest_common.sh@819 -- # '[' -z 1764024 ']' 00:33:51.227 20:28:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:51.227 20:28:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:51.227 20:28:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:33:51.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:51.227 20:28:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:51.227 20:28:48 -- common/autotest_common.sh@10 -- # set +x 00:33:51.227 20:28:48 -- host/digest.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:51.227 [2024-04-25 20:28:49.011156] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:51.227 [2024-04-25 20:28:49.011265] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764024 ] 00:33:51.227 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:51.227 Zero copy mechanism will not be used. 00:33:51.227 EAL: No free 2048 kB hugepages reported on node 1 00:33:51.227 [2024-04-25 20:28:49.124067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.489 [2024-04-25 20:28:49.218269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.057 20:28:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:52.057 20:28:49 -- common/autotest_common.sh@852 -- # return 0 00:33:52.057 20:28:49 -- host/digest.sh@85 -- # [[ 1 -eq 1 ]] 00:33:52.057 20:28:49 -- host/digest.sh@85 -- # bperf_rpc dsa_scan_accel_module 00:33:52.057 20:28:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:33:52.057 [2024-04-25 20:28:49.834817] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:33:52.057 20:28:49 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:33:52.057 20:28:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:57.335 20:28:55 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:57.335 20:28:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:57.593 nvme0n1 00:33:57.593 20:28:55 -- host/digest.sh@91 -- # bperf_py perform_tests 00:33:57.593 20:28:55 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:57.593 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:57.593 Zero copy mechanism will not be used. 00:33:57.593 Running I/O for 2 seconds... 
00:33:59.496 00:33:59.496 Latency(us) 00:33:59.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:59.496 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:59.496 nvme0n1 : 2.00 7502.12 937.76 0.00 0.00 2129.01 1483.18 8036.78 00:33:59.496 =================================================================================================================== 00:33:59.496 Total : 7502.12 937.76 0.00 0.00 2129.01 1483.18 8036.78 00:33:59.496 0 00:33:59.496 20:28:57 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:33:59.496 20:28:57 -- host/digest.sh@92 -- # get_accel_stats 00:33:59.496 20:28:57 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:59.496 20:28:57 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:59.496 | select(.opcode=="crc32c") 00:33:59.496 | "\(.module_name) \(.executed)"' 00:33:59.496 20:28:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:59.754 20:28:57 -- host/digest.sh@93 -- # [[ 1 -eq 1 ]] 00:33:59.754 20:28:57 -- host/digest.sh@93 -- # exp_module=dsa 00:33:59.754 20:28:57 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:33:59.754 20:28:57 -- host/digest.sh@95 -- # [[ dsa == \d\s\a ]] 00:33:59.754 20:28:57 -- host/digest.sh@97 -- # killprocess 1764024 00:33:59.754 20:28:57 -- common/autotest_common.sh@926 -- # '[' -z 1764024 ']' 00:33:59.754 20:28:57 -- common/autotest_common.sh@930 -- # kill -0 1764024 00:33:59.754 20:28:57 -- common/autotest_common.sh@931 -- # uname 00:33:59.754 20:28:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:59.754 20:28:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1764024 00:33:59.754 20:28:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:59.754 20:28:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:59.754 20:28:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1764024' 00:33:59.754 killing process with pid 1764024 00:33:59.754 20:28:57 -- common/autotest_common.sh@945 -- # kill 1764024 00:33:59.754 Received shutdown signal, test time was about 2.000000 seconds 00:33:59.754 00:33:59.754 Latency(us) 00:33:59.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:59.754 =================================================================================================================== 00:33:59.754 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:59.754 20:28:57 -- common/autotest_common.sh@950 -- # wait 1764024 00:34:01.133 20:28:58 -- host/digest.sh@126 -- # killprocess 1757920 00:34:01.134 20:28:58 -- common/autotest_common.sh@926 -- # '[' -z 1757920 ']' 00:34:01.134 20:28:58 -- common/autotest_common.sh@930 -- # kill -0 1757920 00:34:01.134 20:28:58 -- common/autotest_common.sh@931 -- # uname 00:34:01.134 20:28:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:01.134 20:28:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1757920 00:34:01.134 20:28:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:01.134 20:28:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:01.134 20:28:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1757920' 00:34:01.134 killing process with pid 1757920 00:34:01.134 20:28:59 -- common/autotest_common.sh@945 -- # kill 1757920 00:34:01.134 20:28:59 -- common/autotest_common.sh@950 -- # wait 1757920 00:34:01.700 00:34:01.700 real 
0m41.815s 00:34:01.700 user 1m1.674s 00:34:01.701 sys 0m3.969s 00:34:01.701 20:28:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:01.701 20:28:59 -- common/autotest_common.sh@10 -- # set +x 00:34:01.701 ************************************ 00:34:01.701 END TEST nvmf_digest_clean 00:34:01.701 ************************************ 00:34:01.701 20:28:59 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:34:01.701 20:28:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:01.701 20:28:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:01.701 20:28:59 -- common/autotest_common.sh@10 -- # set +x 00:34:01.701 ************************************ 00:34:01.701 START TEST nvmf_digest_error 00:34:01.701 ************************************ 00:34:01.701 20:28:59 -- common/autotest_common.sh@1104 -- # run_digest_error 00:34:01.701 20:28:59 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:34:01.701 20:28:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:01.701 20:28:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:01.701 20:28:59 -- common/autotest_common.sh@10 -- # set +x 00:34:01.701 20:28:59 -- nvmf/common.sh@469 -- # nvmfpid=1766145 00:34:01.701 20:28:59 -- nvmf/common.sh@470 -- # waitforlisten 1766145 00:34:01.701 20:28:59 -- common/autotest_common.sh@819 -- # '[' -z 1766145 ']' 00:34:01.701 20:28:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.701 20:28:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:01.701 20:28:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:01.701 20:28:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:01.701 20:28:59 -- common/autotest_common.sh@10 -- # set +x 00:34:01.701 20:28:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:01.701 [2024-04-25 20:28:59.571001] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:01.701 [2024-04-25 20:28:59.571118] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:01.959 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.959 [2024-04-25 20:28:59.691869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.959 [2024-04-25 20:28:59.788461] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:01.959 [2024-04-25 20:28:59.788640] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:01.959 [2024-04-25 20:28:59.788653] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:01.959 [2024-04-25 20:28:59.788662] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:01.959 [2024-04-25 20:28:59.788688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.528 20:29:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:02.528 20:29:00 -- common/autotest_common.sh@852 -- # return 0 00:34:02.528 20:29:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:02.528 20:29:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:02.528 20:29:00 -- common/autotest_common.sh@10 -- # set +x 00:34:02.528 20:29:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:02.528 20:29:00 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:02.528 20:29:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:02.528 20:29:00 -- common/autotest_common.sh@10 -- # set +x 00:34:02.528 [2024-04-25 20:29:00.297164] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:02.528 20:29:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:02.528 20:29:00 -- host/digest.sh@104 -- # common_target_config 00:34:02.528 20:29:00 -- host/digest.sh@43 -- # rpc_cmd 00:34:02.528 20:29:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:02.528 20:29:00 -- common/autotest_common.sh@10 -- # set +x 00:34:02.528 null0 00:34:02.528 [2024-04-25 20:29:00.455721] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:02.788 [2024-04-25 20:29:00.479886] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.788 20:29:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:02.788 20:29:00 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:34:02.788 20:29:00 -- host/digest.sh@54 -- # local rw bs qd 00:34:02.788 20:29:00 -- host/digest.sh@56 -- # rw=randread 00:34:02.788 20:29:00 -- host/digest.sh@56 -- # bs=4096 00:34:02.788 20:29:00 -- host/digest.sh@56 -- # qd=128 00:34:02.788 20:29:00 -- host/digest.sh@58 -- # bperfpid=1766181 00:34:02.788 20:29:00 -- host/digest.sh@60 -- # waitforlisten 1766181 /var/tmp/bperf.sock 00:34:02.788 20:29:00 -- common/autotest_common.sh@819 -- # '[' -z 1766181 ']' 00:34:02.788 20:29:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:02.788 20:29:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:02.788 20:29:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:02.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:02.788 20:29:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:02.788 20:29:00 -- common/autotest_common.sh@10 -- # set +x 00:34:02.788 20:29:00 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:34:02.788 [2024-04-25 20:29:00.555009] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
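The accel_rpc notice above is the crux of this test: the crc32c opcode is reassigned from a real offload module to the software "error" module, which can later be told to corrupt its results on demand. common_target_config then builds the usual target: a null bdev (null0) exported over NVMe/TCP on 10.0.0.2:4420. Reconstructed as stand-alone rpc.py calls against the target socket, the sequence is roughly the following (the NQN and listener address are taken from the log; the null-bdev size/block size and the transport options are illustrative placeholders, and framework_start_init is implied by the --wait-for-rpc startup rather than shown verbatim in the trace):

    ./scripts/rpc.py accel_assign_opc -o crc32c -m error     # route crc32c through the error-injection module
    ./scripts/rpc.py framework_start_init                    # finish app startup after the opcode reassignment
    ./scripts/rpc.py nvmf_create_transport -t tcp            # transport options elided
    ./scripts/rpc.py bdev_null_create null0 100 4096         # size (MB) and block size here are illustrative
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420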
00:34:02.788 [2024-04-25 20:29:00.555129] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1766181 ] 00:34:02.788 EAL: No free 2048 kB hugepages reported on node 1 00:34:02.788 [2024-04-25 20:29:00.672212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.047 [2024-04-25 20:29:00.767619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:03.615 20:29:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:03.615 20:29:01 -- common/autotest_common.sh@852 -- # return 0 00:34:03.615 20:29:01 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:03.615 20:29:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:03.615 20:29:01 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:03.615 20:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:03.615 20:29:01 -- common/autotest_common.sh@10 -- # set +x 00:34:03.615 20:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:03.615 20:29:01 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:03.615 20:29:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:03.874 nvme0n1 00:34:03.874 20:29:01 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:03.874 20:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:03.874 20:29:01 -- common/autotest_common.sh@10 -- # set +x 00:34:03.874 20:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:03.874 20:29:01 -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:03.874 20:29:01 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:03.874 Running I/O for 2 seconds... 
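On the initiator side, bdevperf was started with -z (wait for RPC) on /var/tmp/bperf.sock, so the host configuration also happens over RPC: NVMe error stats are enabled with unbounded retries, any stale error injection is cleared, the controller is attached with the data digest enabled (--ddgst), and only then is the error module told to start corrupting crc32c results (-t corrupt, at the interval given by -i). Laid out as individual commands (all taken from the trace above, with paths shortened to the spdk checkout):

    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest surfaces below as the same three-line pattern: nvme_tcp reports a data digest error on the qpair, the offending READ is printed, and the command completes with TRANSIENT TRANSPORT ERROR (00/22). Because --bdev-retry-count is -1, bdevperf keeps retrying these I/Os instead of failing the job, which is why the run continues through the repeated errors.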
00:34:03.874 [2024-04-25 20:29:01.669661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:03.874 [2024-04-25 20:29:01.669704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.874 [2024-04-25 20:29:01.669718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.874 [2024-04-25 20:29:01.682313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:03.874 [2024-04-25 20:29:01.682344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.874 [2024-04-25 20:29:01.682356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.874 [2024-04-25 20:29:01.694236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:03.874 [2024-04-25 20:29:01.694262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.874 [2024-04-25 20:29:01.694273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.874 [2024-04-25 20:29:01.710954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:03.874 [2024-04-25 20:29:01.710982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.874 [2024-04-25 20:29:01.710993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.874 [2024-04-25 20:29:01.723351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:03.874 [2024-04-25 20:29:01.723377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.874 [2024-04-25 20:29:01.723387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.874 [2024-04-25 20:29:01.735508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:03.874 [2024-04-25 20:29:01.735532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.874 [2024-04-25 20:29:01.735542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.874 [2024-04-25 20:29:01.747751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:03.874 [2024-04-25 20:29:01.747775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.874 [2024-04-25 20:29:01.747790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.874 [2024-04-25 20:29:01.759944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:03.874 [2024-04-25 20:29:01.759968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.874 [2024-04-25 20:29:01.759977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.874 [2024-04-25 20:29:01.772276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:03.874 [2024-04-25 20:29:01.772300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.874 [2024-04-25 20:29:01.772309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.874 [2024-04-25 20:29:01.783965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:03.874 [2024-04-25 20:29:01.783989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.874 [2024-04-25 20:29:01.783998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.874 [2024-04-25 20:29:01.795798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:03.874 [2024-04-25 20:29:01.795821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.874 [2024-04-25 20:29:01.795831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.133 [2024-04-25 20:29:01.807531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.133 [2024-04-25 20:29:01.807556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.133 [2024-04-25 20:29:01.807566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.133 [2024-04-25 20:29:01.819860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.133 [2024-04-25 20:29:01.819883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:01.819893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:01.831970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 20:29:01.831995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 
[2024-04-25 20:29:01.832005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:01.844468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 20:29:01.844495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:01.844515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:01.856828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 20:29:01.856852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:01.856861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:01.869100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 20:29:01.869122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:01.869132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:01.881780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 20:29:01.881803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:01.881812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:01.894081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 20:29:01.894106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:01.894116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:01.906427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 20:29:01.906450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:01.906459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:01.919338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 20:29:01.919361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:93 nsid:1 lba:2084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:01.919371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:01.931640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 20:29:01.931663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:01.931673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:01.944384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 20:29:01.944407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:01.944417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:01.956815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 20:29:01.956838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:01.956850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:01.969137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 20:29:01.969160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:01.969169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:01.981456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 20:29:01.981479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:01.981488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:01.993382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 20:29:01.993405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:01.993414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:02.006070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 
20:29:02.006093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:02.006102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:02.018411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 20:29:02.018434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:02.018444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:02.030831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 20:29:02.030854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:02.030863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:02.043358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 20:29:02.043382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:02.043392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.134 [2024-04-25 20:29:02.055820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.134 [2024-04-25 20:29:02.055843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.134 [2024-04-25 20:29:02.055853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.393 [2024-04-25 20:29:02.066971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.393 [2024-04-25 20:29:02.066996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.393 [2024-04-25 20:29:02.067005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.393 [2024-04-25 20:29:02.075503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.393 [2024-04-25 20:29:02.075526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.393 [2024-04-25 20:29:02.075535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.393 [2024-04-25 20:29:02.087321] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.393 [2024-04-25 20:29:02.087344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.393 [2024-04-25 20:29:02.087352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.393 [2024-04-25 20:29:02.098378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.393 [2024-04-25 20:29:02.098401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.393 [2024-04-25 20:29:02.098411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.393 [2024-04-25 20:29:02.110588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.393 [2024-04-25 20:29:02.110611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.393 [2024-04-25 20:29:02.110620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.393 [2024-04-25 20:29:02.123210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.393 [2024-04-25 20:29:02.123234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.393 [2024-04-25 20:29:02.123244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.393 [2024-04-25 20:29:02.136037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.393 [2024-04-25 20:29:02.136061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.393 [2024-04-25 20:29:02.136070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.393 [2024-04-25 20:29:02.147628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.393 [2024-04-25 20:29:02.147653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.393 [2024-04-25 20:29:02.147663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.393 [2024-04-25 20:29:02.159648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.394 [2024-04-25 20:29:02.159671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.394 [2024-04-25 20:29:02.159685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.394 [2024-04-25 20:29:02.171589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.394 [2024-04-25 20:29:02.171612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.394 [2024-04-25 20:29:02.171622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.394 [2024-04-25 20:29:02.183935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.394 [2024-04-25 20:29:02.183958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.394 [2024-04-25 20:29:02.183968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.394 [2024-04-25 20:29:02.196127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.394 [2024-04-25 20:29:02.196149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.394 [2024-04-25 20:29:02.196159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.394 [2024-04-25 20:29:02.208143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.394 [2024-04-25 20:29:02.208167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.394 [2024-04-25 20:29:02.208178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.394 [2024-04-25 20:29:02.220669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.394 [2024-04-25 20:29:02.220695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.394 [2024-04-25 20:29:02.220706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.394 [2024-04-25 20:29:02.234892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.394 [2024-04-25 20:29:02.234915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.394 [2024-04-25 20:29:02.234924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.394 [2024-04-25 20:29:02.246998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.394 [2024-04-25 20:29:02.247022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.394 [2024-04-25 20:29:02.247032] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.394 [2024-04-25 20:29:02.259284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.394 [2024-04-25 20:29:02.259307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.394 [2024-04-25 20:29:02.259316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.394 [2024-04-25 20:29:02.271669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.394 [2024-04-25 20:29:02.271692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.394 [2024-04-25 20:29:02.271701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.394 [2024-04-25 20:29:02.283687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.394 [2024-04-25 20:29:02.283710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.394 [2024-04-25 20:29:02.283720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.394 [2024-04-25 20:29:02.295899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.394 [2024-04-25 20:29:02.295923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.394 [2024-04-25 20:29:02.295932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.394 [2024-04-25 20:29:02.307961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.394 [2024-04-25 20:29:02.307985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.394 [2024-04-25 20:29:02.307996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.394 [2024-04-25 20:29:02.320055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.394 [2024-04-25 20:29:02.320078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.394 [2024-04-25 20:29:02.320088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.654 [2024-04-25 20:29:02.332204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.654 [2024-04-25 20:29:02.332235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15284 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.654 [2024-04-25 20:29:02.332245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.654 [2024-04-25 20:29:02.344564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.654 [2024-04-25 20:29:02.344587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.344598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.356714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.356738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.356748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.368899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.368928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.368941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.381017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.381041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.381051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.393172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.393195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.393204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.405225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.405249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.405259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.417389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.417413] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.417423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.429530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.429553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.429563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.441537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.441563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.441573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.453850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.453874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.453883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.465849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.465873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.465882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.478257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.478280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.478290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.490152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.490175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.490184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.502591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.502615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.502625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.514545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.514568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.514578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.527022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.527048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.527058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.539746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.539770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.539780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.551438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.551462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.551472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.563462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.563486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.563500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.655 [2024-04-25 20:29:02.575859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.655 [2024-04-25 20:29:02.575882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.655 [2024-04-25 20:29:02.575896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.917 [2024-04-25 
20:29:02.587765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.917 [2024-04-25 20:29:02.587790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.917 [2024-04-25 20:29:02.587799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.917 [2024-04-25 20:29:02.600173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.917 [2024-04-25 20:29:02.600197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.917 [2024-04-25 20:29:02.600207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.917 [2024-04-25 20:29:02.612365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.917 [2024-04-25 20:29:02.612389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.917 [2024-04-25 20:29:02.612399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.917 [2024-04-25 20:29:02.624551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.917 [2024-04-25 20:29:02.624575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.917 [2024-04-25 20:29:02.624585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.917 [2024-04-25 20:29:02.637320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.917 [2024-04-25 20:29:02.637344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.917 [2024-04-25 20:29:02.637354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.917 [2024-04-25 20:29:02.649293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.917 [2024-04-25 20:29:02.649317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.917 [2024-04-25 20:29:02.649327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.917 [2024-04-25 20:29:02.661451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.917 [2024-04-25 20:29:02.661473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.917 [2024-04-25 20:29:02.661483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.917 [2024-04-25 20:29:02.673420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.917 [2024-04-25 20:29:02.673444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.917 [2024-04-25 20:29:02.673453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.917 [2024-04-25 20:29:02.684990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.917 [2024-04-25 20:29:02.685016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.917 [2024-04-25 20:29:02.685026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.917 [2024-04-25 20:29:02.697425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.917 [2024-04-25 20:29:02.697449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.917 [2024-04-25 20:29:02.697459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.917 [2024-04-25 20:29:02.709950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.917 [2024-04-25 20:29:02.709974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.917 [2024-04-25 20:29:02.709983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.917 [2024-04-25 20:29:02.721892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.917 [2024-04-25 20:29:02.721915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.917 [2024-04-25 20:29:02.721924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.917 [2024-04-25 20:29:02.733903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.917 [2024-04-25 20:29:02.733926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.917 [2024-04-25 20:29:02.733935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.917 [2024-04-25 20:29:02.745708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.917 [2024-04-25 20:29:02.745736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.917 [2024-04-25 
20:29:02.745746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.917 [2024-04-25 20:29:02.758447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.917 [2024-04-25 20:29:02.758473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.917 [2024-04-25 20:29:02.758482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.917 [2024-04-25 20:29:02.775010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.917 [2024-04-25 20:29:02.775036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.917 [2024-04-25 20:29:02.775046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.917 [2024-04-25 20:29:02.787002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.917 [2024-04-25 20:29:02.787027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.917 [2024-04-25 20:29:02.787041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.918 [2024-04-25 20:29:02.799218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.918 [2024-04-25 20:29:02.799242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.918 [2024-04-25 20:29:02.799252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.918 [2024-04-25 20:29:02.811240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.918 [2024-04-25 20:29:02.811263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.918 [2024-04-25 20:29:02.811272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.918 [2024-04-25 20:29:02.823259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.918 [2024-04-25 20:29:02.823283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.918 [2024-04-25 20:29:02.823293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.918 [2024-04-25 20:29:02.835424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.918 [2024-04-25 20:29:02.835448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:10224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.918 [2024-04-25 20:29:02.835458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.918 [2024-04-25 20:29:02.847364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:04.918 [2024-04-25 20:29:02.847389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.918 [2024-04-25 20:29:02.847399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:02.859757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:02.859781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:02.859791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:02.871937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:02.871962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:02.871973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:02.884070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:02.884094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:02.884104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:02.896057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:02.896085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:02.896095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:02.908216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:02.908240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:02.908250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:02.920436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 
20:29:02.920460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:02.920470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:02.932620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:02.932643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:02.932652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:02.944855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:02.944879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:02.944889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:02.957083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:02.957106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:02.957116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:02.969265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:02.969289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:02.969299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:02.981680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:02.981704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:02.981714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:02.993604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:02.993629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:02.993643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:03.005598] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:03.005622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:03.005632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:03.017573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:03.017606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:03.017616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:03.029888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:03.029913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:03.029923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:03.041879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:03.041903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:03.041913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:03.053857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:03.053880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:03.053890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:03.065694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:03.065718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:03.065728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:03.078090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:03.078113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:03.078123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:03.090230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:03.090255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:03.090265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.180 [2024-04-25 20:29:03.102172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.180 [2024-04-25 20:29:03.102203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.180 [2024-04-25 20:29:03.102213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.114178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.114204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.114214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.126053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.126076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.126086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.138205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.138228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.138238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.150171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.150195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.150204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.162126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.162150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.162160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.174367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.174391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.174400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.186425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.186448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.186458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.198603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.198626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.198640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.210788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.210811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.210821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.222936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.222959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.222969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.234932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.234955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.234965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.246879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.246902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23812 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.246912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.259063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.259088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.259098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.271299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.271323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.271332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.283290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.283314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.283323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.295283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.295307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.295317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.307763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.307792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.307802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.319277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.319301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.319310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.331645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.331668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.331678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.343831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.343855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.343864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.356036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.356059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.356069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.441 [2024-04-25 20:29:03.368217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.441 [2024-04-25 20:29:03.368251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.441 [2024-04-25 20:29:03.368260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.700 [2024-04-25 20:29:03.380418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.700 [2024-04-25 20:29:03.380443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.700 [2024-04-25 20:29:03.380452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.700 [2024-04-25 20:29:03.392828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.700 [2024-04-25 20:29:03.392851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.700 [2024-04-25 20:29:03.392861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.700 [2024-04-25 20:29:03.404723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.700 [2024-04-25 20:29:03.404747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.700 [2024-04-25 20:29:03.404768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.700 [2024-04-25 20:29:03.416756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x613000003d80) 00:34:05.700 [2024-04-25 20:29:03.416784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.701 [2024-04-25 20:29:03.416796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.701 [2024-04-25 20:29:03.428838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.701 [2024-04-25 20:29:03.428864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.701 [2024-04-25 20:29:03.428874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.701 [2024-04-25 20:29:03.441502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.701 [2024-04-25 20:29:03.441527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.701 [2024-04-25 20:29:03.441536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.701 [2024-04-25 20:29:03.453495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.701 [2024-04-25 20:29:03.453519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.701 [2024-04-25 20:29:03.453529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.701 [2024-04-25 20:29:03.465440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.701 [2024-04-25 20:29:03.465467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.701 [2024-04-25 20:29:03.465477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.701 [2024-04-25 20:29:03.477634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.701 [2024-04-25 20:29:03.477658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.701 [2024-04-25 20:29:03.477668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.701 [2024-04-25 20:29:03.489285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.701 [2024-04-25 20:29:03.489308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.701 [2024-04-25 20:29:03.489317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.701 [2024-04-25 
20:29:03.501791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.701 [2024-04-25 20:29:03.501817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.701 [2024-04-25 20:29:03.501827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.701 [2024-04-25 20:29:03.514348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.701 [2024-04-25 20:29:03.514377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.701 [2024-04-25 20:29:03.514387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.701 [2024-04-25 20:29:03.526659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.701 [2024-04-25 20:29:03.526682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.701 [2024-04-25 20:29:03.526691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.701 [2024-04-25 20:29:03.538392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.701 [2024-04-25 20:29:03.538415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.701 [2024-04-25 20:29:03.538424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.701 [2024-04-25 20:29:03.555240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.701 [2024-04-25 20:29:03.555264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.701 [2024-04-25 20:29:03.555273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.701 [2024-04-25 20:29:03.567210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.701 [2024-04-25 20:29:03.567233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.701 [2024-04-25 20:29:03.567242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.701 [2024-04-25 20:29:03.579282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.701 [2024-04-25 20:29:03.579305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.701 [2024-04-25 20:29:03.579314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.701 [2024-04-25 20:29:03.591670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.701 [2024-04-25 20:29:03.591695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.701 [2024-04-25 20:29:03.591705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.701 [2024-04-25 20:29:03.603900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.701 [2024-04-25 20:29:03.603924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.701 [2024-04-25 20:29:03.603934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.701 [2024-04-25 20:29:03.616187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.701 [2024-04-25 20:29:03.616211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.701 [2024-04-25 20:29:03.616221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.701 [2024-04-25 20:29:03.628328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.701 [2024-04-25 20:29:03.628351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.701 [2024-04-25 20:29:03.628361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.959 [2024-04-25 20:29:03.640482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.959 [2024-04-25 20:29:03.640509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.959 [2024-04-25 20:29:03.640518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.959 [2024-04-25 20:29:03.652657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:05.959 [2024-04-25 20:29:03.652680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.959 [2024-04-25 20:29:03.652690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.959 00:34:05.959 Latency(us) 00:34:05.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:05.959 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:05.959 nvme0n1 : 2.04 20442.81 79.85 0.00 0.00 6133.63 2276.51 46082.16 00:34:05.959 
=================================================================================================================== 00:34:05.959 Total : 20442.81 79.85 0.00 0.00 6133.63 2276.51 46082.16 00:34:05.959 0 00:34:05.959 20:29:03 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:05.959 20:29:03 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:05.959 20:29:03 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:05.959 | .driver_specific 00:34:05.959 | .nvme_error 00:34:05.959 | .status_code 00:34:05.959 | .command_transient_transport_error' 00:34:05.959 20:29:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:05.959 20:29:03 -- host/digest.sh@71 -- # (( 163 > 0 )) 00:34:05.959 20:29:03 -- host/digest.sh@73 -- # killprocess 1766181 00:34:05.959 20:29:03 -- common/autotest_common.sh@926 -- # '[' -z 1766181 ']' 00:34:05.959 20:29:03 -- common/autotest_common.sh@930 -- # kill -0 1766181 00:34:05.959 20:29:03 -- common/autotest_common.sh@931 -- # uname 00:34:05.959 20:29:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:05.959 20:29:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1766181 00:34:06.217 20:29:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:06.217 20:29:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:06.217 20:29:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1766181' 00:34:06.217 killing process with pid 1766181 00:34:06.217 20:29:03 -- common/autotest_common.sh@945 -- # kill 1766181 00:34:06.217 Received shutdown signal, test time was about 2.000000 seconds 00:34:06.217 00:34:06.217 Latency(us) 00:34:06.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:06.217 =================================================================================================================== 00:34:06.217 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:06.217 20:29:03 -- common/autotest_common.sh@950 -- # wait 1766181 00:34:06.475 20:29:04 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:34:06.476 20:29:04 -- host/digest.sh@54 -- # local rw bs qd 00:34:06.476 20:29:04 -- host/digest.sh@56 -- # rw=randread 00:34:06.476 20:29:04 -- host/digest.sh@56 -- # bs=131072 00:34:06.476 20:29:04 -- host/digest.sh@56 -- # qd=16 00:34:06.476 20:29:04 -- host/digest.sh@58 -- # bperfpid=1767071 00:34:06.476 20:29:04 -- host/digest.sh@60 -- # waitforlisten 1767071 /var/tmp/bperf.sock 00:34:06.476 20:29:04 -- common/autotest_common.sh@819 -- # '[' -z 1767071 ']' 00:34:06.476 20:29:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:06.476 20:29:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:06.476 20:29:04 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:34:06.476 20:29:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:06.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:06.476 20:29:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:06.476 20:29:04 -- common/autotest_common.sh@10 -- # set +x 00:34:06.476 [2024-04-25 20:29:04.304060] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:34:06.476 [2024-04-25 20:29:04.304141] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1767071 ] 00:34:06.476 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:06.476 Zero copy mechanism will not be used. 00:34:06.476 EAL: No free 2048 kB hugepages reported on node 1 00:34:06.476 [2024-04-25 20:29:04.387639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.734 [2024-04-25 20:29:04.482267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:07.304 20:29:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:07.304 20:29:05 -- common/autotest_common.sh@852 -- # return 0 00:34:07.304 20:29:05 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:07.304 20:29:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:07.304 20:29:05 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:07.304 20:29:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:07.304 20:29:05 -- common/autotest_common.sh@10 -- # set +x 00:34:07.304 20:29:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:07.304 20:29:05 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:07.304 20:29:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:07.564 nvme0n1 00:34:07.824 20:29:05 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:07.824 20:29:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:07.824 20:29:05 -- common/autotest_common.sh@10 -- # set +x 00:34:07.824 20:29:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:07.824 20:29:05 -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:07.824 20:29:05 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:07.824 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:07.824 Zero copy mechanism will not be used. 00:34:07.824 Running I/O for 2 seconds... 
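Note on the step traced above (a sketch, not part of the captured output): this is the 131072-byte / qd=16 randread pass of host/digest.sh. bdevperf is started on /var/tmp/bperf.sock with retries disabled (bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1), CRC32C corruption is injected through accel_error_inject_error, and the controller is attached with --ddgst so each read completes with a data digest error, which the target reports as COMMAND TRANSIENT TRANSPORT ERROR. After the run, the test reads the per-bdev error counters back over the same RPC socket, exactly as the get_transient_errcount step earlier in this log does. A minimal sketch of that query, using the socket path, bdev name, and jq filter taken from this run (the rpc.py path is shortened here; the job itself calls it from the workspace checkout):

    #!/usr/bin/env bash
    # Sketch: read the NVMe error counters kept by --nvme-error-stat and pull out
    # the transient transport error count that digest.sh checks is greater than 0.
    BPERF_SOCK=/var/tmp/bperf.sock   # bdevperf RPC socket used in this run
    BDEV=nvme0n1                     # bdev created by bdev_nvme_attach_controller above

    ./scripts/rpc.py -s "$BPERF_SOCK" bdev_get_iostat -b "$BDEV" \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'

In the 4096-byte pass above this returned 163, satisfying the (( count > 0 )) check before the bdevperf process was killed and this larger-block pass was started.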
00:34:07.824 [2024-04-25 20:29:05.597164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.824 [2024-04-25 20:29:05.597216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.824 [2024-04-25 20:29:05.597231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.824 [2024-04-25 20:29:05.604218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.824 [2024-04-25 20:29:05.604258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.824 [2024-04-25 20:29:05.604270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.824 [2024-04-25 20:29:05.611109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.824 [2024-04-25 20:29:05.611137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.824 [2024-04-25 20:29:05.611148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.824 [2024-04-25 20:29:05.618112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.824 [2024-04-25 20:29:05.618139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.824 [2024-04-25 20:29:05.618149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.824 [2024-04-25 20:29:05.624477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.824 [2024-04-25 20:29:05.624509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.824 [2024-04-25 20:29:05.624519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.824 [2024-04-25 20:29:05.630751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.824 [2024-04-25 20:29:05.630777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.824 [2024-04-25 20:29:05.630787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.824 [2024-04-25 20:29:05.637067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.824 [2024-04-25 20:29:05.637093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.824 [2024-04-25 20:29:05.637102] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.824 [2024-04-25 20:29:05.643314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.824 [2024-04-25 20:29:05.643339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.824 [2024-04-25 20:29:05.643349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.824 [2024-04-25 20:29:05.649605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.824 [2024-04-25 20:29:05.649631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.824 [2024-04-25 20:29:05.649641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.824 [2024-04-25 20:29:05.655810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.824 [2024-04-25 20:29:05.655835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.824 [2024-04-25 20:29:05.655845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.824 [2024-04-25 20:29:05.662017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.824 [2024-04-25 20:29:05.662042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.824 [2024-04-25 20:29:05.662052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.824 [2024-04-25 20:29:05.668278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.824 [2024-04-25 20:29:05.668303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.824 [2024-04-25 20:29:05.668313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.824 [2024-04-25 20:29:05.674445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.824 [2024-04-25 20:29:05.674470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.824 [2024-04-25 20:29:05.674480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.824 [2024-04-25 20:29:05.680631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.824 [2024-04-25 20:29:05.680657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:07.824 [2024-04-25 20:29:05.680667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.824 [2024-04-25 20:29:05.686862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.824 [2024-04-25 20:29:05.686890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.824 [2024-04-25 20:29:05.686909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.824 [2024-04-25 20:29:05.693049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.825 [2024-04-25 20:29:05.693075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.825 [2024-04-25 20:29:05.693086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.825 [2024-04-25 20:29:05.699225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.825 [2024-04-25 20:29:05.699250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.825 [2024-04-25 20:29:05.699260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.825 [2024-04-25 20:29:05.705274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.825 [2024-04-25 20:29:05.705300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.825 [2024-04-25 20:29:05.705310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.825 [2024-04-25 20:29:05.711303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.825 [2024-04-25 20:29:05.711335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.825 [2024-04-25 20:29:05.711345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.825 [2024-04-25 20:29:05.717448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.825 [2024-04-25 20:29:05.717475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.825 [2024-04-25 20:29:05.717485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.825 [2024-04-25 20:29:05.723669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.825 [2024-04-25 20:29:05.723694] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.825 [2024-04-25 20:29:05.723705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.825 [2024-04-25 20:29:05.729879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.825 [2024-04-25 20:29:05.729905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.825 [2024-04-25 20:29:05.729915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.825 [2024-04-25 20:29:05.736092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.825 [2024-04-25 20:29:05.736117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.825 [2024-04-25 20:29:05.736128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.825 [2024-04-25 20:29:05.742399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.825 [2024-04-25 20:29:05.742425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.825 [2024-04-25 20:29:05.742435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.825 [2024-04-25 20:29:05.749609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.825 [2024-04-25 20:29:05.749635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.825 [2024-04-25 20:29:05.749646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.825 [2024-04-25 20:29:05.755312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:07.825 [2024-04-25 20:29:05.755338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.825 [2024-04-25 20:29:05.755348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.087 [2024-04-25 20:29:05.761668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.087 [2024-04-25 20:29:05.761695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.087 [2024-04-25 20:29:05.761706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.087 [2024-04-25 20:29:05.767869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 
00:34:08.087 [2024-04-25 20:29:05.767895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.087 [2024-04-25 20:29:05.767907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.087 [2024-04-25 20:29:05.774171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.087 [2024-04-25 20:29:05.774196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.087 [2024-04-25 20:29:05.774206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.087 [2024-04-25 20:29:05.780378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.087 [2024-04-25 20:29:05.780401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.087 [2024-04-25 20:29:05.780411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.087 [2024-04-25 20:29:05.786642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.087 [2024-04-25 20:29:05.786666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.087 [2024-04-25 20:29:05.786676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.087 [2024-04-25 20:29:05.792861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.087 [2024-04-25 20:29:05.792885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.087 [2024-04-25 20:29:05.792897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.087 [2024-04-25 20:29:05.799080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.087 [2024-04-25 20:29:05.799104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.087 [2024-04-25 20:29:05.799114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.087 [2024-04-25 20:29:05.805289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.087 [2024-04-25 20:29:05.805314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.087 [2024-04-25 20:29:05.805325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.087 [2024-04-25 20:29:05.811517] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.087 [2024-04-25 20:29:05.811540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.087 [2024-04-25 20:29:05.811550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.817798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.817822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.817837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.824003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.824028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.824038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.830245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.830268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.830278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.836540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.836564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.836574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.842763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.842786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.842796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.848974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.848998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.849007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.855202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.855225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.855234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.861453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.861476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.861486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.867652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.867682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.867691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.873865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.873888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.873898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.880055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.880077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.880086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.886206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.886229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.886239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.892377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.892401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.892411] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.898598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.898622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.898632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.904804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.904828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.904838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.911064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.911088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.911098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.917513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.917538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.917548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.923788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.923812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.923825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.929912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.929935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.929945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.936168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.936190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.936200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.942537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.942560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.942570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.948815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.948838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.948848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.954972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.955002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.955012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.961170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.961195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.961205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.967328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.967352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.967362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.973496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.088 [2024-04-25 20:29:05.973519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.088 [2024-04-25 20:29:05.973529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.088 [2024-04-25 20:29:05.979684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.089 [2024-04-25 20:29:05.979708] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.089 [2024-04-25 20:29:05.979717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.089 [2024-04-25 20:29:05.985869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.089 [2024-04-25 20:29:05.985893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.089 [2024-04-25 20:29:05.985903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.089 [2024-04-25 20:29:05.992022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.089 [2024-04-25 20:29:05.992045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.089 [2024-04-25 20:29:05.992055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.089 [2024-04-25 20:29:05.998231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.089 [2024-04-25 20:29:05.998255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.089 [2024-04-25 20:29:05.998265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.089 [2024-04-25 20:29:06.004461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.089 [2024-04-25 20:29:06.004486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.089 [2024-04-25 20:29:06.004500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.089 [2024-04-25 20:29:06.010725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.089 [2024-04-25 20:29:06.010750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.089 [2024-04-25 20:29:06.010760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.089 [2024-04-25 20:29:06.016953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.089 [2024-04-25 20:29:06.016977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.089 [2024-04-25 20:29:06.016987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.023139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.023164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.023174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.029079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.029104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.029118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.035594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.035619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.035629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.041736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.041765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.041777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.047821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.047847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.047857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.054536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.054561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.054571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.061224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.061248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.061258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.352 
[2024-04-25 20:29:06.067948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.067971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.067981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.074667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.074691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.074702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.080609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.080632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.080642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.087361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.087385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.087394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.092707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.092731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.092741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.098430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.098453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.098463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.103743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.103768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.103778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.110833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.110859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.110869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.115645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.115669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.115679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.119961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.119986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.119996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.123124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.123147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.123157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.128346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.128370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.128383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.133416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.133440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.133449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.137966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.137990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 
[2024-04-25 20:29:06.138000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.143147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.143170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.143180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.148515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.352 [2024-04-25 20:29:06.148540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.352 [2024-04-25 20:29:06.148549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.352 [2024-04-25 20:29:06.153903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.153927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.153937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.159047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.159071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.159081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.163913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.163935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.163945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.168733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.168756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.168765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.174055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.174078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.174088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.179154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.179178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.179188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.184541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.184566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.184576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.189422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.189445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.189455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.194286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.194310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.194320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.198692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.198716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.198726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.203507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.203530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.203540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.208333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 
[2024-04-25 20:29:06.208356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.208366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.213088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.213112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.213126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.218011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.218035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.218044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.222562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.222586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.222595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.226835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.226859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.226868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.230916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.230941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.230952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.235769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.235794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.235803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.238313] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.238337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.238346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.242233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.242256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.242266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.246750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.246775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.246785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.251586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.251610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.251619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.256545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.256570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.256579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.261030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.261053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.261063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.265334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.265360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.265371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.269835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.269861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.353 [2024-04-25 20:29:06.269872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.353 [2024-04-25 20:29:06.274675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.353 [2024-04-25 20:29:06.274700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.354 [2024-04-25 20:29:06.274710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.354 [2024-04-25 20:29:06.279586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.354 [2024-04-25 20:29:06.279611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.354 [2024-04-25 20:29:06.279621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.284753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.284781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.284792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.289607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.289631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.289646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.294495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.294520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.294530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.299604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.299629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.299638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.303473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.303503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.303514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.307068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.307092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.307101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.310511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.310534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.310544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.314417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.314442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.314452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.318338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.318362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.318371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.322383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.322407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.322417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.327124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.327148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.327157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.332137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.332160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.332170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.336631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.336655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.336665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.341904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.341929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.341939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.347397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.347422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.347432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.352749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.352773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.352783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.358594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.358619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.358628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.364210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.364235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.364245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.369982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.370007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.370021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.376322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.376352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.376362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.383028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.383053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.383063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.389379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.389404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.389414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.394172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.617 [2024-04-25 20:29:06.394196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.617 [2024-04-25 20:29:06.394206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.617 [2024-04-25 20:29:06.399314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.399338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.399356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.404287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.404313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.404322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.409219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.409244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.409254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.414185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.414210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.414220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.418871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.418901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.418911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.423662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.423687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.423697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.428704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.428730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.428740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.433611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.433636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.433646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.438346] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.438380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.438389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.443150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.443175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.443185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.448054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.448078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.448087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.452787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.452811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.452821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.457558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.457581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.457595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.462475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.462503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.462513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.467260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.467284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.467293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.472041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.472067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.472080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.476132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.476161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.476172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.480067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.480093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.480103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.484024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.484051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.484062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.487868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.487895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.487905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.491104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.491128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.491138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.494706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.494735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.494745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.498791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.498814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.498824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.503317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.503342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.503352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.508303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.508327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.508336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.513291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.618 [2024-04-25 20:29:06.513315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.618 [2024-04-25 20:29:06.513325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.618 [2024-04-25 20:29:06.518178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.619 [2024-04-25 20:29:06.518202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.619 [2024-04-25 20:29:06.518212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.619 [2024-04-25 20:29:06.522600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.619 [2024-04-25 20:29:06.522624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.619 [2024-04-25 20:29:06.522634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.619 [2024-04-25 20:29:06.526887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.619 [2024-04-25 20:29:06.526912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21792 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:08.619 [2024-04-25 20:29:06.526921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.619 [2024-04-25 20:29:06.529662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.619 [2024-04-25 20:29:06.529685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.619 [2024-04-25 20:29:06.529695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.619 [2024-04-25 20:29:06.534601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.619 [2024-04-25 20:29:06.534626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.619 [2024-04-25 20:29:06.534637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.619 [2024-04-25 20:29:06.539577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.619 [2024-04-25 20:29:06.539601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.619 [2024-04-25 20:29:06.539611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.619 [2024-04-25 20:29:06.544413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.619 [2024-04-25 20:29:06.544437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.619 [2024-04-25 20:29:06.544446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.881 [2024-04-25 20:29:06.549282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.881 [2024-04-25 20:29:06.549309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.881 [2024-04-25 20:29:06.549319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.881 [2024-04-25 20:29:06.553649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.881 [2024-04-25 20:29:06.553673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.881 [2024-04-25 20:29:06.553683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.881 [2024-04-25 20:29:06.558071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.881 [2024-04-25 20:29:06.558094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.881 [2024-04-25 20:29:06.558104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.881 [2024-04-25 20:29:06.562177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.881 [2024-04-25 20:29:06.562200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.881 [2024-04-25 20:29:06.562210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.881 [2024-04-25 20:29:06.566377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.881 [2024-04-25 20:29:06.566401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.881 [2024-04-25 20:29:06.566411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.881 [2024-04-25 20:29:06.571446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.881 [2024-04-25 20:29:06.571474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.881 [2024-04-25 20:29:06.571484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.881 [2024-04-25 20:29:06.576258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.881 [2024-04-25 20:29:06.576284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.881 [2024-04-25 20:29:06.576294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.881 [2024-04-25 20:29:06.581396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.881 [2024-04-25 20:29:06.581421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.881 [2024-04-25 20:29:06.581431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.881 [2024-04-25 20:29:06.586273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.881 [2024-04-25 20:29:06.586299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.881 [2024-04-25 20:29:06.586310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.881 [2024-04-25 20:29:06.591164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x613000003d80) 00:34:08.881 [2024-04-25 20:29:06.591190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.881 [2024-04-25 20:29:06.591200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.881 [2024-04-25 20:29:06.596050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.881 [2024-04-25 20:29:06.596101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.881 [2024-04-25 20:29:06.596112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.881 [2024-04-25 20:29:06.601106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.881 [2024-04-25 20:29:06.601131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.881 [2024-04-25 20:29:06.601141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.881 [2024-04-25 20:29:06.605895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.881 [2024-04-25 20:29:06.605919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.881 [2024-04-25 20:29:06.605928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.881 [2024-04-25 20:29:06.610858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.881 [2024-04-25 20:29:06.610883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.881 [2024-04-25 20:29:06.610892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.881 [2024-04-25 20:29:06.615661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.881 [2024-04-25 20:29:06.615685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.615695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.620575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.620599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.620609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.625432] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.625457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.625467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.630425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.630449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.630459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.634799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.634823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.634833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.638902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.638926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.638936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.642951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.642976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.642986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.647466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.647495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.647505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.652313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.652340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.652350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.657305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.657331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.657341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.662244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.662270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.662280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.667295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.667319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.667329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.672322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.672347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.672357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.677388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.677412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.677423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.682080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.682104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.682113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.686453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.686477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.686487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.690843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.690868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.690878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.694997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.695022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.695032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.698981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.699006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.699016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.703310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.703335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.703344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.707410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.707438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.707448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.711362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.711389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.711399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.715674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.715706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6432 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.715716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.720518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.720545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.720555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.725485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.725513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.725524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.730142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.730166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.730181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.734907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.882 [2024-04-25 20:29:06.734932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.882 [2024-04-25 20:29:06.734942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.882 [2024-04-25 20:29:06.739843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.883 [2024-04-25 20:29:06.739869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.883 [2024-04-25 20:29:06.739878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.883 [2024-04-25 20:29:06.744845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.883 [2024-04-25 20:29:06.744869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.883 [2024-04-25 20:29:06.744879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.883 [2024-04-25 20:29:06.749669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.883 [2024-04-25 20:29:06.749694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.883 [2024-04-25 20:29:06.749703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.883 [2024-04-25 20:29:06.754559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.883 [2024-04-25 20:29:06.754583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.883 [2024-04-25 20:29:06.754592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.883 [2024-04-25 20:29:06.759404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.883 [2024-04-25 20:29:06.759429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.883 [2024-04-25 20:29:06.759438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.883 [2024-04-25 20:29:06.764213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.883 [2024-04-25 20:29:06.764236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.883 [2024-04-25 20:29:06.764245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.883 [2024-04-25 20:29:06.768888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.883 [2024-04-25 20:29:06.768914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.883 [2024-04-25 20:29:06.768923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.883 [2024-04-25 20:29:06.773933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.883 [2024-04-25 20:29:06.773957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.883 [2024-04-25 20:29:06.773967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.883 [2024-04-25 20:29:06.778421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.883 [2024-04-25 20:29:06.778447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.883 [2024-04-25 20:29:06.778457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.883 [2024-04-25 20:29:06.782600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x613000003d80) 00:34:08.883 [2024-04-25 20:29:06.782625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.883 [2024-04-25 20:29:06.782636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.883 [2024-04-25 20:29:06.787180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.883 [2024-04-25 20:29:06.787205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.883 [2024-04-25 20:29:06.787215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.883 [2024-04-25 20:29:06.792121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.883 [2024-04-25 20:29:06.792146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.883 [2024-04-25 20:29:06.792155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.883 [2024-04-25 20:29:06.797078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.883 [2024-04-25 20:29:06.797104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.883 [2024-04-25 20:29:06.797113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.883 [2024-04-25 20:29:06.802038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.883 [2024-04-25 20:29:06.802063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.883 [2024-04-25 20:29:06.802074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.883 [2024-04-25 20:29:06.807021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:08.883 [2024-04-25 20:29:06.807046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.883 [2024-04-25 20:29:06.807056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.811989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.812018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.812033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.816957] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.816982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.816992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.822052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.822077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.822087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.827086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.827112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.827122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.832246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.832273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.832284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.837369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.837396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.837407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.842158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.842185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.842195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.846426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.846453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.846463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.850600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.850627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.850637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.855247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.855270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.855280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.860340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.860364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.860374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.865320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.865345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.865354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.870093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.870117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.870127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.874398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.874423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.874433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.877165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.877189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.877200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.881643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.881668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.881678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.886004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.886030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.886040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.890246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.890272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.890285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.894161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.894187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.894198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.898347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.898375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.898387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.903145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.903176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.144 [2024-04-25 20:29:06.903189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.144 [2024-04-25 20:29:06.907644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.144 [2024-04-25 20:29:06.907672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.907683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.912839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.912866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.912876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.918002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.918029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.918039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.922975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.923001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.923012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.927921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.927946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.927957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.933529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.933555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.933565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.937971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.937996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.938006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.942151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.942176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.942186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.946840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.946865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.946875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.951791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.951817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.951827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.956725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.956751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.956761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.961652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.961678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.961687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.966662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.966692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.966702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.971671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.971696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.971709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.976733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.976758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.976768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.981753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.981778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.981788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.985640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.985666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.985676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.989283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.989309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.989320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.993496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.993522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.993538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:06.997934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:06.997960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:06.997970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:07.003248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:07.003275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:07.003286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:07.009245] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:07.009271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:07.009281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:07.015729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:07.015754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:07.015764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:07.022342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:07.022369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:07.022381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:07.028361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:07.028386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:07.028396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:07.033254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:07.033279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:07.033289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:07.038032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.145 [2024-04-25 20:29:07.038058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.145 [2024-04-25 20:29:07.038068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.145 [2024-04-25 20:29:07.043163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.146 [2024-04-25 20:29:07.043190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.146 [2024-04-25 20:29:07.043200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.146 [2024-04-25 20:29:07.048113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.146 [2024-04-25 20:29:07.048138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.146 [2024-04-25 20:29:07.048148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.146 [2024-04-25 20:29:07.053140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.146 [2024-04-25 20:29:07.053167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.146 [2024-04-25 20:29:07.053177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.146 [2024-04-25 20:29:07.057928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.146 [2024-04-25 20:29:07.057955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.146 [2024-04-25 20:29:07.057969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.146 [2024-04-25 20:29:07.062918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.146 [2024-04-25 20:29:07.062947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.146 [2024-04-25 20:29:07.062958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.146 [2024-04-25 20:29:07.068261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.146 [2024-04-25 20:29:07.068287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.146 [2024-04-25 20:29:07.068297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.405 [2024-04-25 20:29:07.074767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.405 [2024-04-25 20:29:07.074797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.405 [2024-04-25 20:29:07.074808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.405 [2024-04-25 20:29:07.081562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.405 [2024-04-25 20:29:07.081588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.405 [2024-04-25 20:29:07.081598] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.405 [2024-04-25 20:29:07.087993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.405 [2024-04-25 20:29:07.088024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.405 [2024-04-25 20:29:07.088034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.094677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.094705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.094716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.101361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.101387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.101400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.108055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.108082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.108094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.114938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.114969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.114978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.121647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.121678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.121687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.128358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.128385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.128395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.135085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.135112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.135123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.141790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.141816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.141828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.148732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.148758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.148771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.155439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.155465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.155475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.162102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.162129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.162140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.168945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.168974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.168992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.175611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.175636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.175652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.182255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.182280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.182290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.188461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.188489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.188506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.194830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.194856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.194867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.201509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.201534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.201545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.208264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.208289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.208299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.214711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.214736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.214747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.218659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.218691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.218701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.222596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.222627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.222637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.226465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.226496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.226508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.230669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.230694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.230704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.235536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.235562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.235572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.240437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.240464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.240475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.244746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.406 [2024-04-25 20:29:07.244778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.406 [2024-04-25 20:29:07.244788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.406 [2024-04-25 20:29:07.249007] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.249033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.249043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.407 [2024-04-25 20:29:07.253029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.253054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.253065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.407 [2024-04-25 20:29:07.257209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.257234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.257249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.407 [2024-04-25 20:29:07.260479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.260510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.260520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.407 [2024-04-25 20:29:07.265362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.265387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.265397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.407 [2024-04-25 20:29:07.270511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.270538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.270549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.407 [2024-04-25 20:29:07.275066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.275091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.275102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.407 [2024-04-25 20:29:07.279393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.279418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.279429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.407 [2024-04-25 20:29:07.283974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.283999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.284009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.407 [2024-04-25 20:29:07.288003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.288027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.288038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.407 [2024-04-25 20:29:07.292769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.292794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.292805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.407 [2024-04-25 20:29:07.297574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.297602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.297612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.407 [2024-04-25 20:29:07.302454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.302478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.302488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.407 [2024-04-25 20:29:07.307421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.307450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.307462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.407 [2024-04-25 20:29:07.312373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.312399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.312410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.407 [2024-04-25 20:29:07.317168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.317193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.317204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.407 [2024-04-25 20:29:07.322093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.322119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.322130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.407 [2024-04-25 20:29:07.326922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.326946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.326957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.407 [2024-04-25 20:29:07.331725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.407 [2024-04-25 20:29:07.331751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.407 [2024-04-25 20:29:07.331762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.668 [2024-04-25 20:29:07.336592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.668 [2024-04-25 20:29:07.336618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.668 [2024-04-25 20:29:07.336628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.668 [2024-04-25 20:29:07.341349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.668 [2024-04-25 20:29:07.341374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.668 [2024-04-25 20:29:07.341384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.668 [2024-04-25 20:29:07.346252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.668 [2024-04-25 20:29:07.346277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.668 [2024-04-25 20:29:07.346287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.668 [2024-04-25 20:29:07.350990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.668 [2024-04-25 20:29:07.351015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.668 [2024-04-25 20:29:07.351025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.668 [2024-04-25 20:29:07.355793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.668 [2024-04-25 20:29:07.355818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.668 [2024-04-25 20:29:07.355829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.668 [2024-04-25 20:29:07.360812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.668 [2024-04-25 20:29:07.360836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.668 [2024-04-25 20:29:07.360846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.668 [2024-04-25 20:29:07.365644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.668 [2024-04-25 20:29:07.365667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.668 [2024-04-25 20:29:07.365678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.668 [2024-04-25 20:29:07.370580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.668 [2024-04-25 20:29:07.370604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.668 [2024-04-25 20:29:07.370614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.668 [2024-04-25 20:29:07.375526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.668 [2024-04-25 20:29:07.375550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.668 [2024-04-25 20:29:07.375561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.668 [2024-04-25 20:29:07.380447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.668 [2024-04-25 20:29:07.380476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.668 [2024-04-25 20:29:07.380487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.385833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.385858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.385868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.391260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.391284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.391294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.396104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.396128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.396139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.400819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.400844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.400854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.405173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.405198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.405209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.410239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.410263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.410273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.415741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.415766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.415776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.421525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.421549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.421559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.426320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.426344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.426356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.431695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.431720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.431731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.437075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.437100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.437110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.442622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.442647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.442657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.448214] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.448238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.448249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.453851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.453874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.453885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.459474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.459502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.459512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.464533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.464557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.464568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.469973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.470000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.470011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.475014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.475039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.475050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.480652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.480677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.480687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.485783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.485807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.485818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.491120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.491144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.491154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.497001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.497027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.497036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.502720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.502746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.502757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.508188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.508213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.508223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.513804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.513828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.513838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.519427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.519450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.519460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.669 [2024-04-25 20:29:07.525028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.669 [2024-04-25 20:29:07.525052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.669 [2024-04-25 20:29:07.525063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.670 [2024-04-25 20:29:07.530282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.670 [2024-04-25 20:29:07.530305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.670 [2024-04-25 20:29:07.530315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.670 [2024-04-25 20:29:07.535892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.670 [2024-04-25 20:29:07.535916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.670 [2024-04-25 20:29:07.535926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.670 [2024-04-25 20:29:07.543008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.670 [2024-04-25 20:29:07.543031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.670 [2024-04-25 20:29:07.543041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.670 [2024-04-25 20:29:07.548557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.670 [2024-04-25 20:29:07.548582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.670 [2024-04-25 20:29:07.548592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.670 [2024-04-25 20:29:07.553832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.670 [2024-04-25 20:29:07.553856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.670 [2024-04-25 20:29:07.553866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.670 [2024-04-25 20:29:07.559327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:09.670 [2024-04-25 20:29:07.559358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0
00:34:09.670 [2024-04-25 20:29:07.559370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:09.670 [2024-04-25 20:29:07.566439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80)
00:34:09.670 [2024-04-25 20:29:07.566464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:09.670 [2024-04-25 20:29:07.566478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:09.670 [2024-04-25 20:29:07.572044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80)
00:34:09.670 [2024-04-25 20:29:07.572068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:09.670 [2024-04-25 20:29:07.572078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:09.670 [2024-04-25 20:29:07.577295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80)
00:34:09.670 [2024-04-25 20:29:07.577318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:09.670 [2024-04-25 20:29:07.577328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:09.670 [2024-04-25 20:29:07.582364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80)
00:34:09.670 [2024-04-25 20:29:07.582386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:09.670 [2024-04-25 20:29:07.582395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:09.670
00:34:09.670 Latency(us)
00:34:09.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:09.670 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:34:09.670 nvme0n1 : 2.00 5895.20 736.90 0.00 0.00 2711.53 515.23 9864.89
00:34:09.670 ===================================================================================================================
00:34:09.670 Total : 5895.20 736.90 0.00 0.00 2711.53 515.23 9864.89
00:34:09.670 0
00:34:09.931 20:29:07 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:09.931 20:29:07 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:09.931 20:29:07 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:09.931 | .driver_specific
00:34:09.931 | .nvme_error
00:34:09.931 | .status_code
00:34:09.931 | .command_transient_transport_error'
00:34:09.931 20:29:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:09.931 20:29:07 -- host/digest.sh@71 -- # (( 380 > 0 ))
00:34:09.931 20:29:07 -- host/digest.sh@73 -- # killprocess 1767071
00:34:09.931 20:29:07 -- common/autotest_common.sh@926 -- # '[' -z 1767071 ']'
00:34:09.931 20:29:07 -- common/autotest_common.sh@930 -- # kill -0 1767071
00:34:09.931 20:29:07 -- common/autotest_common.sh@931 -- # uname
00:34:09.931 20:29:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:34:09.931 20:29:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1767071
00:34:09.931 20:29:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:34:09.931 20:29:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:34:09.931 20:29:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1767071'
00:34:09.931 killing process with pid 1767071
00:34:09.931 20:29:07 -- common/autotest_common.sh@945 -- # kill 1767071
00:34:09.931 Received shutdown signal, test time was about 2.000000 seconds
00:34:09.931
00:34:09.931 Latency(us)
00:34:09.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:09.931 ===================================================================================================================
00:34:09.931 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:09.931 20:29:07 -- common/autotest_common.sh@950 -- # wait 1767071
00:34:10.512 20:29:08 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:34:10.512 20:29:08 -- host/digest.sh@54 -- # local rw bs qd
00:34:10.512 20:29:08 -- host/digest.sh@56 -- # rw=randwrite
00:34:10.512 20:29:08 -- host/digest.sh@56 -- # bs=4096
00:34:10.512 20:29:08 -- host/digest.sh@56 -- # qd=128
00:34:10.512 20:29:08 -- host/digest.sh@58 -- # bperfpid=1767693
00:34:10.512 20:29:08 -- host/digest.sh@60 -- # waitforlisten 1767693 /var/tmp/bperf.sock
00:34:10.512 20:29:08 -- common/autotest_common.sh@819 -- # '[' -z 1767693 ']'
00:34:10.512 20:29:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:10.512 20:29:08 -- common/autotest_common.sh@824 -- # local max_retries=100
00:34:10.512 20:29:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:34:10.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:10.512 20:29:08 -- common/autotest_common.sh@828 -- # xtrace_disable
00:34:10.512 20:29:08 -- common/autotest_common.sh@10 -- # set +x
00:34:10.512 20:29:08 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:34:10.512 [2024-04-25 20:29:08.216908] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:34:10.512 [2024-04-25 20:29:08.217057] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1767693 ] 00:34:10.512 EAL: No free 2048 kB hugepages reported on node 1 00:34:10.512 [2024-04-25 20:29:08.347923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.802 [2024-04-25 20:29:08.444122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:11.060 20:29:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:11.060 20:29:08 -- common/autotest_common.sh@852 -- # return 0 00:34:11.060 20:29:08 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:11.060 20:29:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:11.318 20:29:09 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:11.318 20:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:11.318 20:29:09 -- common/autotest_common.sh@10 -- # set +x 00:34:11.318 20:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:11.318 20:29:09 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:11.318 20:29:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:11.577 nvme0n1 00:34:11.577 20:29:09 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:11.577 20:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:11.577 20:29:09 -- common/autotest_common.sh@10 -- # set +x 00:34:11.577 20:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:11.577 20:29:09 -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:11.577 20:29:09 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:11.577 Running I/O for 2 seconds... 
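Before the 2-second randwrite run starts, the trace above shows the digest.sh setup sequence against the new bdevperf instance: bdev_nvme_set_options is called with --nvme-error-stat (keep per-bdev NVMe error counters) and --bdev-retry-count -1, any previous crc32c error injection is cleared with -t disable, the controller is attached over TCP with --ddgst so data digests are generated and verified, crc32c error injection is switched to corrupt mode with -i 256, and bdevperf.py perform_tests kicks off the run. A minimal sketch of that RPC sequence follows; it assumes the rpc_cmd injection calls address the target application's default RPC socket (/var/tmp/spdk.sock) while the bdev_nvme calls use /var/tmp/bperf.sock as shown above, and it is a reconstruction rather than the script itself:

    BPERF_RPC="/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    TGT_RPC="/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed target socket

    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $TGT_RPC accel_error_inject_error -o crc32c -t disable           # start with injection cleared
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0               # data digest enabled on the attach
    $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256    # inject crc32c corruption (flags as in the trace)
    /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests                         # 2-second randwrite pass

Each corrupted crc32c then shows up below as a data_crc32_calc_done data digest error on the TCP qpair, and the affected WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what the error counter checked above tallies.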
00:34:11.577 [2024-04-25 20:29:09.396129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebb98 00:34:11.577 [2024-04-25 20:29:09.397317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.577 [2024-04-25 20:29:09.397360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:11.577 [2024-04-25 20:29:09.403637] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6cc8 00:34:11.577 [2024-04-25 20:29:09.403947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.577 [2024-04-25 20:29:09.403980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:11.577 [2024-04-25 20:29:09.412428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5ec8 00:34:11.577 [2024-04-25 20:29:09.412711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.577 [2024-04-25 20:29:09.412736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:11.578 [2024-04-25 20:29:09.421219] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8e88 00:34:11.578 [2024-04-25 20:29:09.421467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.578 [2024-04-25 20:29:09.421496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:11.578 [2024-04-25 20:29:09.430002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f92c0 00:34:11.578 [2024-04-25 20:29:09.430226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.578 [2024-04-25 20:29:09.430248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:11.578 [2024-04-25 20:29:09.438789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195efae0 00:34:11.578 [2024-04-25 20:29:09.438987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.578 [2024-04-25 20:29:09.439009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:11.578 [2024-04-25 20:29:09.447528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e9e10 00:34:11.578 [2024-04-25 20:29:09.447765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.578 [2024-04-25 20:29:09.447788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:11.578 [2024-04-25 20:29:09.457934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4298 00:34:11.578 [2024-04-25 20:29:09.459181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.578 [2024-04-25 20:29:09.459203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.578 [2024-04-25 20:29:09.466735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195efae0 00:34:11.578 [2024-04-25 20:29:09.467990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.578 [2024-04-25 20:29:09.468012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.578 [2024-04-25 20:29:09.474424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaef0 00:34:11.578 [2024-04-25 20:29:09.475212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.578 [2024-04-25 20:29:09.475234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:11.578 [2024-04-25 20:29:09.483052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4b08 00:34:11.578 [2024-04-25 20:29:09.484041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.578 [2024-04-25 20:29:09.484063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:11.578 [2024-04-25 20:29:09.491976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e9e10 00:34:11.578 [2024-04-25 20:29:09.492959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.578 [2024-04-25 20:29:09.492981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:11.578 [2024-04-25 20:29:09.500772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5ec8 00:34:11.578 [2024-04-25 20:29:09.501771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.578 [2024-04-25 20:29:09.501793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:11.837 [2024-04-25 20:29:09.509560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:34:11.837 [2024-04-25 20:29:09.510567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.837 [2024-04-25 20:29:09.510589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:11.837 [2024-04-25 20:29:09.518340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:34:11.837 [2024-04-25 20:29:09.519366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.837 [2024-04-25 20:29:09.519390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:11.837 [2024-04-25 20:29:09.526976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e7c50 00:34:11.837 [2024-04-25 20:29:09.527692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.837 [2024-04-25 20:29:09.527715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:11.837 [2024-04-25 20:29:09.536270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:34:11.837 [2024-04-25 20:29:09.537435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.837 [2024-04-25 20:29:09.537457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.837 [2024-04-25 20:29:09.544824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.837 [2024-04-25 20:29:09.545237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.837 [2024-04-25 20:29:09.545259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.553881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.554090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.554116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.562945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.563152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.563173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.572190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.572396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9054 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:11.838 [2024-04-25 20:29:09.572419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.581271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.581478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.581503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.590330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.590540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.590563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.599396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.599605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.599626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.608461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.608672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.608694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.617535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.617742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.617764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.626585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.626791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.626812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.635639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.635853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:19 nsid:1 lba:24079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.635875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.644711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.644920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.644941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.653769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.653976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.653997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.662808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.663016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.663037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.671888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.672095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.672116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.680942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.681149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.681170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.689990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.690198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.690221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.699043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.699250] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.699271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.708099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.708306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.708327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.717298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.717511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.717533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.726353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.726565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.726586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.735411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.735623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.735644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.744459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.744670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.744691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.753529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.753738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.753759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:11.838 [2024-04-25 20:29:09.762593] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195f5378 00:34:11.838 [2024-04-25 20:29:09.762799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.838 [2024-04-25 20:29:09.762821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.099 [2024-04-25 20:29:09.771650] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.099 [2024-04-25 20:29:09.771858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.099 [2024-04-25 20:29:09.771879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.099 [2024-04-25 20:29:09.780782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.099 [2024-04-25 20:29:09.780988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.099 [2024-04-25 20:29:09.781009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.099 [2024-04-25 20:29:09.789858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.099 [2024-04-25 20:29:09.790066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.099 [2024-04-25 20:29:09.790090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.099 [2024-04-25 20:29:09.798912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.099 [2024-04-25 20:29:09.799119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.099 [2024-04-25 20:29:09.799140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.099 [2024-04-25 20:29:09.807977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.099 [2024-04-25 20:29:09.808181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.099 [2024-04-25 20:29:09.808202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.099 [2024-04-25 20:29:09.817039] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.099 [2024-04-25 20:29:09.817244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.099 [2024-04-25 20:29:09.817263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.099 [2024-04-25 20:29:09.826094] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.099 [2024-04-25 20:29:09.826300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.099 [2024-04-25 20:29:09.826320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:09.835133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.835341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.835360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:09.844200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.844406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.844425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:09.853253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.853458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.853478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:09.862308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.862518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.862538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:09.871359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.871570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.871590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:09.880410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.880623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.880643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 
[2024-04-25 20:29:09.889463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.889676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.889696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:09.898512] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.898718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.898737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:09.907549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.907755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.907774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:09.916841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.917053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.917073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:09.925890] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.926098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.926118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:09.934931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.935136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.935155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:09.943989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.944197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.944220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:09.953047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.953254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.953274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:09.962110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.962318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.962338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:09.971175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.971381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.971402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:09.980239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.980446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.980465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:09.989310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.989519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.989538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:09.998379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:09.998589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:09.998609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.100 [2024-04-25 20:29:10.009473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.100 [2024-04-25 20:29:10.009738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.100 [2024-04-25 20:29:10.009768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.101 [2024-04-25 20:29:10.019338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.101 [2024-04-25 20:29:10.019548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.101 [2024-04-25 20:29:10.019569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.101 [2024-04-25 20:29:10.029413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.101 [2024-04-25 20:29:10.029679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.101 [2024-04-25 20:29:10.029703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.040386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.040606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.361 [2024-04-25 20:29:10.040628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.050386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.050615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.361 [2024-04-25 20:29:10.050637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.061072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.061309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.361 [2024-04-25 20:29:10.061333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.072152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.072394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.361 [2024-04-25 20:29:10.072420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.083299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.083542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:12.361 [2024-04-25 20:29:10.083565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.094167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.094402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.361 [2024-04-25 20:29:10.094424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.105220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.105459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.361 [2024-04-25 20:29:10.105481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.116190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.116428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.361 [2024-04-25 20:29:10.116452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.126886] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.127110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.361 [2024-04-25 20:29:10.127133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.136761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.136966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.361 [2024-04-25 20:29:10.136988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.145840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.146044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.361 [2024-04-25 20:29:10.146067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.154913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.155118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 
nsid:1 lba:15575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.361 [2024-04-25 20:29:10.155141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.163993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.164196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.361 [2024-04-25 20:29:10.164218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.173071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.173277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.361 [2024-04-25 20:29:10.173299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.182150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.182353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.361 [2024-04-25 20:29:10.182375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.191236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.191438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.361 [2024-04-25 20:29:10.191460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.200367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.200581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.361 [2024-04-25 20:29:10.200604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.209463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.209674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.361 [2024-04-25 20:29:10.209696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.361 [2024-04-25 20:29:10.218561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.361 [2024-04-25 20:29:10.218768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.361 [2024-04-25 20:29:10.218790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.362 [2024-04-25 20:29:10.227646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.362 [2024-04-25 20:29:10.227850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.362 [2024-04-25 20:29:10.227873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.362 [2024-04-25 20:29:10.236730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.362 [2024-04-25 20:29:10.236935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.362 [2024-04-25 20:29:10.236957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.362 [2024-04-25 20:29:10.245806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.362 [2024-04-25 20:29:10.246011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.362 [2024-04-25 20:29:10.246031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.362 [2024-04-25 20:29:10.254931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.362 [2024-04-25 20:29:10.255135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.362 [2024-04-25 20:29:10.255156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.362 [2024-04-25 20:29:10.264026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.362 [2024-04-25 20:29:10.264230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.362 [2024-04-25 20:29:10.264250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.362 [2024-04-25 20:29:10.273108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.362 [2024-04-25 20:29:10.273313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.362 [2024-04-25 20:29:10.273334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.362 [2024-04-25 20:29:10.282206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195f5378 00:34:12.362 [2024-04-25 20:29:10.282397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.362 [2024-04-25 20:29:10.282417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.362 [2024-04-25 20:29:10.291305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.362 [2024-04-25 20:29:10.291502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.362 [2024-04-25 20:29:10.291523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.621 [2024-04-25 20:29:10.300454] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.621 [2024-04-25 20:29:10.300657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.621 [2024-04-25 20:29:10.300678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.621 [2024-04-25 20:29:10.309558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.621 [2024-04-25 20:29:10.309750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.621 [2024-04-25 20:29:10.309770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.621 [2024-04-25 20:29:10.318638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.621 [2024-04-25 20:29:10.318832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.621 [2024-04-25 20:29:10.318852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.621 [2024-04-25 20:29:10.327733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.621 [2024-04-25 20:29:10.327925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.621 [2024-04-25 20:29:10.327944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.621 [2024-04-25 20:29:10.336814] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.621 [2024-04-25 20:29:10.337004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.621 [2024-04-25 20:29:10.337024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.621 [2024-04-25 20:29:10.345903] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.621 [2024-04-25 20:29:10.346094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.621 [2024-04-25 20:29:10.346114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.621 [2024-04-25 20:29:10.355014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.621 [2024-04-25 20:29:10.355209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.621 [2024-04-25 20:29:10.355233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.621 [2024-04-25 20:29:10.364108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.621 [2024-04-25 20:29:10.364302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.621 [2024-04-25 20:29:10.364322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.621 [2024-04-25 20:29:10.373190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.621 [2024-04-25 20:29:10.373382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.621 [2024-04-25 20:29:10.373402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.621 [2024-04-25 20:29:10.382271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.621 [2024-04-25 20:29:10.382465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.621 [2024-04-25 20:29:10.382487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.621 [2024-04-25 20:29:10.391351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.621 [2024-04-25 20:29:10.391549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.621 [2024-04-25 20:29:10.391569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.621 [2024-04-25 20:29:10.400441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.622 [2024-04-25 20:29:10.400642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.622 [2024-04-25 20:29:10.400661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.622 
[2024-04-25 20:29:10.409553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.622 [2024-04-25 20:29:10.409746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.622 [2024-04-25 20:29:10.409770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.622 [2024-04-25 20:29:10.418649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.622 [2024-04-25 20:29:10.418840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.622 [2024-04-25 20:29:10.418861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.622 [2024-04-25 20:29:10.427747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.622 [2024-04-25 20:29:10.427939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.622 [2024-04-25 20:29:10.427959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.622 [2024-04-25 20:29:10.436845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.622 [2024-04-25 20:29:10.437045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.622 [2024-04-25 20:29:10.437065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.622 [2024-04-25 20:29:10.445943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.622 [2024-04-25 20:29:10.446137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.622 [2024-04-25 20:29:10.446157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.622 [2024-04-25 20:29:10.455041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.622 [2024-04-25 20:29:10.455242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.622 [2024-04-25 20:29:10.455263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.622 [2024-04-25 20:29:10.464146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.622 [2024-04-25 20:29:10.464338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.622 [2024-04-25 20:29:10.464358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.622 [2024-04-25 20:29:10.473234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.622 [2024-04-25 20:29:10.473427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.622 [2024-04-25 20:29:10.473448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.622 [2024-04-25 20:29:10.482316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.622 [2024-04-25 20:29:10.482514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.622 [2024-04-25 20:29:10.482533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.622 [2024-04-25 20:29:10.491421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.622 [2024-04-25 20:29:10.491623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.622 [2024-04-25 20:29:10.491643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.622 [2024-04-25 20:29:10.500531] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.622 [2024-04-25 20:29:10.500723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.622 [2024-04-25 20:29:10.500743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.622 [2024-04-25 20:29:10.509634] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.622 [2024-04-25 20:29:10.509827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.622 [2024-04-25 20:29:10.509852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.622 [2024-04-25 20:29:10.518719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.622 [2024-04-25 20:29:10.518910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.622 [2024-04-25 20:29:10.518931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.622 [2024-04-25 20:29:10.527816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.622 [2024-04-25 20:29:10.528008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.622 [2024-04-25 20:29:10.528028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.622 [2024-04-25 20:29:10.536890] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.622 [2024-04-25 20:29:10.537079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.622 [2024-04-25 20:29:10.537099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.622 [2024-04-25 20:29:10.545969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.622 [2024-04-25 20:29:10.546164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.622 [2024-04-25 20:29:10.546186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.883 [2024-04-25 20:29:10.555056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.883 [2024-04-25 20:29:10.555249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.883 [2024-04-25 20:29:10.555270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.883 [2024-04-25 20:29:10.564136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.883 [2024-04-25 20:29:10.564328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.883 [2024-04-25 20:29:10.564348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.883 [2024-04-25 20:29:10.573207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.883 [2024-04-25 20:29:10.573396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.883 [2024-04-25 20:29:10.573416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.883 [2024-04-25 20:29:10.582363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.883 [2024-04-25 20:29:10.582559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.883 [2024-04-25 20:29:10.582579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.883 [2024-04-25 20:29:10.591445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.883 [2024-04-25 20:29:10.591642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4554 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:12.883 [2024-04-25 20:29:10.591663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.883 [2024-04-25 20:29:10.600525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.883 [2024-04-25 20:29:10.600716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.883 [2024-04-25 20:29:10.600736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.883 [2024-04-25 20:29:10.609642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.883 [2024-04-25 20:29:10.609833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.883 [2024-04-25 20:29:10.609854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.883 [2024-04-25 20:29:10.618728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.883 [2024-04-25 20:29:10.618919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.883 [2024-04-25 20:29:10.618938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.883 [2024-04-25 20:29:10.627809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.628001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.628022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.636872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.637062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.637081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.645925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.646116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.646136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.654975] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.655166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:67 nsid:1 lba:24104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.655186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.664038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.664226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.664246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.673084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.673274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.673295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.682159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.682348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.682368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.691206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.691395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.691415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.700268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.700460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.700480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.709322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.709517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.709538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.718397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.718588] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.718608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.727458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.727651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.727671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.736523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.736715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.736734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.745726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.745919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.745941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.754785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.754977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.754997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.763859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.764049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.764069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.772920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.773113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.773133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.781983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.782176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.782196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.791036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.791226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.791246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.800102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.800292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.800313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:12.884 [2024-04-25 20:29:10.809158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:12.884 [2024-04-25 20:29:10.809347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.884 [2024-04-25 20:29:10.809367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.144 [2024-04-25 20:29:10.818302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.144 [2024-04-25 20:29:10.818499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.144 [2024-04-25 20:29:10.818519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.144 [2024-04-25 20:29:10.827381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.144 [2024-04-25 20:29:10.827578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.144 [2024-04-25 20:29:10.827598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.144 [2024-04-25 20:29:10.836460] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.144 [2024-04-25 20:29:10.836655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.144 [2024-04-25 20:29:10.836675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.144 [2024-04-25 20:29:10.845540] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:10.845731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:10.845751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:10.854608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:10.854801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:10.854821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:10.863712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:10.863919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:10.863939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:10.872796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:10.872987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:10.873007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:10.881865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:10.882055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:10.882078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:10.890934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:10.891127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:10.891149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:10.900003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:10.900192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:10.900217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 
[2024-04-25 20:29:10.909061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:10.909253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:10.909276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:10.918116] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:10.918308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:10.918330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:10.927402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:10.927596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:10.927619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:10.936471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:10.936666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:10.936689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:10.945527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:10.945719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:10.945740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:10.954583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:10.954773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:10.954795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:10.963628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:10.963817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:10.963839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:10.972686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:10.972876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:10.972898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:10.981746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:10.981940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:10.981961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:10.990790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:10.990982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:10.991003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:10.999865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:11.000054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:11.000075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:11.008908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:11.009097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:11.009119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:11.017960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:11.018150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:11.018171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:11.027042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:11.027231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:11.027253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:11.036099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:11.036288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:11.036310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:11.045144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:11.045333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:11.045354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:11.054207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:11.054396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:11.054417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:11.063239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:11.063432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:11.063452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.145 [2024-04-25 20:29:11.072301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.145 [2024-04-25 20:29:11.072497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.145 [2024-04-25 20:29:11.072518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.405 [2024-04-25 20:29:11.081354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.405 [2024-04-25 20:29:11.081549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.405 [2024-04-25 20:29:11.081571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.405 [2024-04-25 20:29:11.090417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.405 [2024-04-25 20:29:11.090611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18915 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:13.405 [2024-04-25 20:29:11.090632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.405 [2024-04-25 20:29:11.099468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.405 [2024-04-25 20:29:11.099662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.405 [2024-04-25 20:29:11.099683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.405 [2024-04-25 20:29:11.108517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.405 [2024-04-25 20:29:11.108708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.405 [2024-04-25 20:29:11.108728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.405 [2024-04-25 20:29:11.117560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.405 [2024-04-25 20:29:11.117750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.405 [2024-04-25 20:29:11.117771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.405 [2024-04-25 20:29:11.126607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.405 [2024-04-25 20:29:11.126796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.405 [2024-04-25 20:29:11.126817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.405 [2024-04-25 20:29:11.135691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.405 [2024-04-25 20:29:11.135882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.405 [2024-04-25 20:29:11.135908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.405 [2024-04-25 20:29:11.144741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.405 [2024-04-25 20:29:11.144932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.405 [2024-04-25 20:29:11.144954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.405 [2024-04-25 20:29:11.153803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.405 [2024-04-25 20:29:11.153994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:107 nsid:1 lba:2461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.405 [2024-04-25 20:29:11.154016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.405 [2024-04-25 20:29:11.162860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.405 [2024-04-25 20:29:11.163051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.405 [2024-04-25 20:29:11.163072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.405 [2024-04-25 20:29:11.171906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.405 [2024-04-25 20:29:11.172095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.405 [2024-04-25 20:29:11.172116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.406 [2024-04-25 20:29:11.180968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.406 [2024-04-25 20:29:11.181157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.406 [2024-04-25 20:29:11.181179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.406 [2024-04-25 20:29:11.190027] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.406 [2024-04-25 20:29:11.190221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.406 [2024-04-25 20:29:11.190242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.406 [2024-04-25 20:29:11.199081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.406 [2024-04-25 20:29:11.199273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.406 [2024-04-25 20:29:11.199294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.406 [2024-04-25 20:29:11.208160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.406 [2024-04-25 20:29:11.208350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.406 [2024-04-25 20:29:11.208372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.406 [2024-04-25 20:29:11.217229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.406 [2024-04-25 20:29:11.217421] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.406 [2024-04-25 20:29:11.217446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.406 [2024-04-25 20:29:11.226292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.406 [2024-04-25 20:29:11.226482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.406 [2024-04-25 20:29:11.226507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.406 [2024-04-25 20:29:11.235348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.406 [2024-04-25 20:29:11.235540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.406 [2024-04-25 20:29:11.235563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.406 [2024-04-25 20:29:11.244423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.406 [2024-04-25 20:29:11.244618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.406 [2024-04-25 20:29:11.244639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.406 [2024-04-25 20:29:11.253504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.406 [2024-04-25 20:29:11.253698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.406 [2024-04-25 20:29:11.253721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.406 [2024-04-25 20:29:11.262559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.406 [2024-04-25 20:29:11.262749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.406 [2024-04-25 20:29:11.262770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.406 [2024-04-25 20:29:11.271617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.406 [2024-04-25 20:29:11.271805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.406 [2024-04-25 20:29:11.271826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.406 [2024-04-25 20:29:11.280658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195f5378 00:34:13.406 [2024-04-25 20:29:11.280848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.406 [2024-04-25 20:29:11.280869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.406 [2024-04-25 20:29:11.289712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.406 [2024-04-25 20:29:11.289902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.406 [2024-04-25 20:29:11.289926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.406 [2024-04-25 20:29:11.298779] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.406 [2024-04-25 20:29:11.298971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.406 [2024-04-25 20:29:11.298993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.406 [2024-04-25 20:29:11.307838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.406 [2024-04-25 20:29:11.308030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.406 [2024-04-25 20:29:11.308051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.406 [2024-04-25 20:29:11.316917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.406 [2024-04-25 20:29:11.317106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.406 [2024-04-25 20:29:11.317127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.406 [2024-04-25 20:29:11.325971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.406 [2024-04-25 20:29:11.326160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.406 [2024-04-25 20:29:11.326181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.406 [2024-04-25 20:29:11.335017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:13.406 [2024-04-25 20:29:11.335207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.406 [2024-04-25 20:29:11.335229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:13.665 [2024-04-25 20:29:11.344087] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378
00:34:13.665 [2024-04-25 20:29:11.344277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:13.665 [2024-04-25 20:29:11.344299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:13.665 [2024-04-25 20:29:11.353142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378
00:34:13.665 [2024-04-25 20:29:11.353333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:13.665 [2024-04-25 20:29:11.353354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:13.665 [2024-04-25 20:29:11.362192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378
00:34:13.665 [2024-04-25 20:29:11.362381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:13.665 [2024-04-25 20:29:11.362403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:13.665 [2024-04-25 20:29:11.371245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378
00:34:13.665 [2024-04-25 20:29:11.371440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:13.665 [2024-04-25 20:29:11.371461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:13.665 [2024-04-25 20:29:11.380305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378
00:34:13.665 [2024-04-25 20:29:11.380501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:13.665 [2024-04-25 20:29:11.380522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:13.665
00:34:13.665 Latency(us)
00:34:13.665 Device Information                                             : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:13.665 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:13.665 nvme0n1                                                        :       2.00   27835.47     108.73       0.00     0.00    4591.34    2198.91   11520.54
00:34:13.665 ===================================================================================================================
00:34:13.665 Total                                                          :            27835.47     108.73       0.00     0.00    4591.34    2198.91   11520.54
00:34:13.665 0
00:34:13.666 20:29:11 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:13.666 20:29:11 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:13.666 20:29:11 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:13.666 | .driver_specific
00:34:13.666 | .nvme_error
00:34:13.666 | .status_code
00:34:13.666 | .command_transient_transport_error'
00:34:13.666 20:29:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
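The xtrace above is host/digest.sh reading back the per-bdev NVMe error counters: bdev_get_iostat is issued over the bdevperf RPC socket and jq extracts the command_transient_transport_error counter, which the test requires to be non-zero. A minimal stand-alone sketch of the same query, using the socket and rpc.py path from this run (the count_transient_errors helper name is illustrative, not part of the test scripts):

    #!/usr/bin/env bash
    # Count "transient transport error" completions recorded for a bdev.
    # Assumes bdevperf is listening on -r /var/tmp/bperf.sock and that
    # bdev_nvme_set_options --nvme-error-stat was applied before the run.
    count_transient_errors() {
        local bdev=$1
        /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
    }

    errs=$(count_transient_errors nvme0n1)
    (( errs > 0 )) && echo "transient transport errors recorded: $errs"

In this run the counter came back as 218, which satisfies the (( 218 > 0 )) assertion traced below before the bperf process is torn down.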
00:34:13.666 20:29:11 -- host/digest.sh@71 -- # (( 218 > 0 ))
00:34:13.666 20:29:11 -- host/digest.sh@73 -- # killprocess 1767693
00:34:13.666 20:29:11 -- common/autotest_common.sh@926 -- # '[' -z 1767693 ']'
00:34:13.666 20:29:11 -- common/autotest_common.sh@930 -- # kill -0 1767693
00:34:13.666 20:29:11 -- common/autotest_common.sh@931 -- # uname
00:34:13.666 20:29:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:34:13.666 20:29:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1767693
00:34:13.666 20:29:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:34:13.666 20:29:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:34:13.666 20:29:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1767693'
00:34:13.666 killing process with pid 1767693
00:34:13.666 20:29:11 -- common/autotest_common.sh@945 -- # kill 1767693
00:34:13.666 Received shutdown signal, test time was about 2.000000 seconds
00:34:13.666
00:34:13.666 Latency(us)
00:34:13.666 Device Information                                             : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:13.666 ===================================================================================================================
00:34:13.666 Total                                                          :       0.00       0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:34:13.666 20:29:11 -- common/autotest_common.sh@950 -- # wait 1767693
00:34:13.666 20:29:11 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:34:14.234 20:29:11 -- host/digest.sh@54 -- # local rw bs qd
00:34:14.234 20:29:11 -- host/digest.sh@56 -- # rw=randwrite
00:34:14.234 20:29:11 -- host/digest.sh@56 -- # bs=131072
00:34:14.234 20:29:11 -- host/digest.sh@56 -- # qd=16
00:34:14.234 20:29:11 -- host/digest.sh@58 -- # bperfpid=1768524
00:34:14.234 20:29:11 -- host/digest.sh@60 -- # waitforlisten 1768524 /var/tmp/bperf.sock
00:34:14.234 20:29:11 -- common/autotest_common.sh@819 -- # '[' -z 1768524 ']'
00:34:14.234 20:29:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:14.234 20:29:11 -- common/autotest_common.sh@824 -- # local max_retries=100
00:34:14.234 20:29:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:34:14.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:14.234 20:29:11 -- common/autotest_common.sh@828 -- # xtrace_disable
00:34:14.234 20:29:11 -- common/autotest_common.sh@10 -- # set +x
00:34:14.234 20:29:11 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:34:14.234 [2024-04-25 20:29:12.010228] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:34:14.234 [2024-04-25 20:29:12.010342] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1768524 ]
00:34:14.234 I/O size of 131072 is greater than zero copy threshold (65536).
00:34:14.234 Zero copy mechanism will not be used.
00:34:14.234 EAL: No free 2048 kB hugepages reported on node 1 00:34:14.234 [2024-04-25 20:29:12.121778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.494 [2024-04-25 20:29:12.215377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.066 20:29:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:15.066 20:29:12 -- common/autotest_common.sh@852 -- # return 0 00:34:15.066 20:29:12 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:15.066 20:29:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:15.066 20:29:12 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:15.066 20:29:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:15.066 20:29:12 -- common/autotest_common.sh@10 -- # set +x 00:34:15.066 20:29:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:15.066 20:29:12 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:15.066 20:29:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:15.325 nvme0n1 00:34:15.325 20:29:13 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:15.325 20:29:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:15.325 20:29:13 -- common/autotest_common.sh@10 -- # set +x 00:34:15.325 20:29:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:15.325 20:29:13 -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:15.325 20:29:13 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:15.583 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:15.583 Zero copy mechanism will not be used. 00:34:15.583 Running I/O for 2 seconds... 
00:34:15.583 [2024-04-25 20:29:13.303192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.583 [2024-04-25 20:29:13.303380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.583 [2024-04-25 20:29:13.303419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.583 [2024-04-25 20:29:13.310741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.583 [2024-04-25 20:29:13.311013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.583 [2024-04-25 20:29:13.311042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.583 [2024-04-25 20:29:13.319004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.583 [2024-04-25 20:29:13.319195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.583 [2024-04-25 20:29:13.319221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.583 [2024-04-25 20:29:13.326973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.583 [2024-04-25 20:29:13.327132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.583 [2024-04-25 20:29:13.327156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.583 [2024-04-25 20:29:13.333718] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.583 [2024-04-25 20:29:13.333829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.583 [2024-04-25 20:29:13.333852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.583 [2024-04-25 20:29:13.340580] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.583 [2024-04-25 20:29:13.340686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.583 [2024-04-25 20:29:13.340711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.583 [2024-04-25 20:29:13.347601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.583 [2024-04-25 20:29:13.347724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.583 [2024-04-25 20:29:13.347747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.583 [2024-04-25 20:29:13.354575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.583 [2024-04-25 20:29:13.354688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.354712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.361553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.361727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.361752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.368528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.368609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.368633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.375376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.375485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.375519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.382020] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.382100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.382122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.388904] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.389014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.389037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.395609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.395684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 
20:29:13.395707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.402458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.402588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.402611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.409426] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.409660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.409684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.416011] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.416251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.416275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.422785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.422937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.422960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.429600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.429701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.429724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.436368] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.436468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.436495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.443209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.443289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.443312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.449797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.449877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.449899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.456606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.456713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.456735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.463323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.463567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.463590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.469949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.470186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.470208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.476700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.476806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.476829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.483396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.483503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.483525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.490096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.490169] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.490191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.496792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.496875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.496901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.503351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.503449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.503472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.584 [2024-04-25 20:29:13.510206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.584 [2024-04-25 20:29:13.510330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.584 [2024-04-25 20:29:13.510353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.843 [2024-04-25 20:29:13.517043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.843 [2024-04-25 20:29:13.517275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.843 [2024-04-25 20:29:13.517298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.843 [2024-04-25 20:29:13.523848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.843 [2024-04-25 20:29:13.524089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.843 [2024-04-25 20:29:13.524112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.843 [2024-04-25 20:29:13.530669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.843 [2024-04-25 20:29:13.530767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.843 [2024-04-25 20:29:13.530789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.843 [2024-04-25 20:29:13.537510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.843 [2024-04-25 20:29:13.537614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.843 [2024-04-25 20:29:13.537637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.843 [2024-04-25 20:29:13.544259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.843 [2024-04-25 20:29:13.544333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.843 [2024-04-25 20:29:13.544356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.843 [2024-04-25 20:29:13.551126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.843 [2024-04-25 20:29:13.551252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.843 [2024-04-25 20:29:13.551274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.843 [2024-04-25 20:29:13.557740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.843 [2024-04-25 20:29:13.557868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.843 [2024-04-25 20:29:13.557891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.843 [2024-04-25 20:29:13.564657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.843 [2024-04-25 20:29:13.564778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.843 [2024-04-25 20:29:13.564801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.843 [2024-04-25 20:29:13.571602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.843 [2024-04-25 20:29:13.571887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.843 [2024-04-25 20:29:13.571911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.843 [2024-04-25 20:29:13.578418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.843 [2024-04-25 20:29:13.578601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.843 [2024-04-25 20:29:13.578624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.843 [2024-04-25 20:29:13.585198] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.843 [2024-04-25 20:29:13.585294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.843 [2024-04-25 20:29:13.585316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.843 [2024-04-25 20:29:13.592021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.592135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.592159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.598676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.598754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.598776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.605440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.605569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.605591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.612284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.612401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.612430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.619135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.619252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.619275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.626024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.626255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.626278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.632686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.632874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.632897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.639368] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.639466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.639494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.645856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.645970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.645993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.652669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.652809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.652833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.659414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.659583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.659605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.666028] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.666125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.666147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.672904] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.673037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.673060] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.679760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.679971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.679995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.686408] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.686589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.686617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.693222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.693351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.693376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.700042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.700145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.700168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.706863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.706979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.707001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.713713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.713793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.713816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.720370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.720467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 
20:29:13.720495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.727081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.727199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.727223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.734086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.734322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.734345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.740780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.740897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.740920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.747418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.747521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.747544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.754312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.754388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.754412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.761285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.761447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.761469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.844 [2024-04-25 20:29:13.767968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:15.844 [2024-04-25 20:29:13.768066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.844 [2024-04-25 20:29:13.768089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.774839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.774962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.774984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.781815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.781948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.781971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.788563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.788787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.788810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.795273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.795368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.795390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.802074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.802177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.802200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.808769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.808884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.808905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.815604] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.815727] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.815749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.822444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.822547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.822569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.829295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.829382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.829405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.836000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.836117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.836139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.843046] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.843282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.843305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.849800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.849956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.849978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.856540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.856636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.856660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.863331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.863428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.863450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.870099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.870197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.870220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.876907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.877063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.877086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.883684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.883803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.883825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.890554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.890703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.890725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.897282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.897521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.897544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.904113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.904263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.904290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.911003] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.911107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.104 [2024-04-25 20:29:13.911129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.104 [2024-04-25 20:29:13.917860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.104 [2024-04-25 20:29:13.917947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:13.917970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:13.924537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:13.924640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:13.924661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:13.931363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:13.931439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:13.931460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:13.938106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:13.938250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:13.938274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:13.945015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:13.945137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:13.945160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:13.951792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:13.952018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:13.952041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:13.958432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:13.958616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:13.958640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:13.965274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:13.965395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:13.965417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:13.971964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:13.972050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:13.972073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:13.976408] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:13.976475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:13.976503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:13.980243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:13.980352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:13.980373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:13.983903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:13.983983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:13.984005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:13.987710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:13.987858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:13.987879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:13.991524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:13.991682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:13.991705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:13.995323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:13.995477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:13.995505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:13.999055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:13.999142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:13.999170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:14.002655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:14.002747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:14.002771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:14.007121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:14.007228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:14.007251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:14.011211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:14.011276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:14.011298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:14.015211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:14.015298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 
20:29:14.015320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:14.020650] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:14.020755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:14.020777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:14.024448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:14.024611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:14.024633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:14.028109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:14.028231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:14.028253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.105 [2024-04-25 20:29:14.031681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.105 [2024-04-25 20:29:14.031766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.105 [2024-04-25 20:29:14.031788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.364 [2024-04-25 20:29:14.035159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.364 [2024-04-25 20:29:14.035269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.035291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.038744] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.038818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.038840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.042330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.042434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.042456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.045905] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.046007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.046028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.049636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.049740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.049762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.054367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.054507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.054529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.058275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.058438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.058461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.062780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.062883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.062905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.066479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.066565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.066588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.070163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.070269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.070293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.073847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.073919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.073942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.077362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.077458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.077480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.081173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.081287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.081310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.084714] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.084865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.084887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.088298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.088384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.088406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.091661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.091777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.091799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.095188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.095267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.095289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.098802] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.098915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.098936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.102397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.102483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.102514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.106071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.106166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.106190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.110564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.110661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.110684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.115543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.115703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.115726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.119219] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.119335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.119357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.122795] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.122916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.122938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.126527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.126614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.126636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.130149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.130220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.130243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.133676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.133755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.365 [2024-04-25 20:29:14.133778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.365 [2024-04-25 20:29:14.137346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.365 [2024-04-25 20:29:14.137429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.137451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.140892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.141042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.141063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.144449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.144609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.144631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.147969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.148087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.148109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.151370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.151485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.151511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.155088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.155178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.155201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.158655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.158778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.158799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.162285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.162361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.162387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.166508] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.166640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.166663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.170323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.170462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.170485] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.175093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.175219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.175242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.179361] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.179467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.179494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.182809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.182880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.182903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.186250] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.186333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.186355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.189664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.189744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.189767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.193193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.193299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.193320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.196708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.196803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 
20:29:14.196825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.200231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.200339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.200361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.203918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.204042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.204064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.207359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.207467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.207488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.211048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.211124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.211146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.214682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.214780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.214808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.218242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.218359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.218383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.221931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.222020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.222042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.225703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.225792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.225817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.229325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.229443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.229465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.232926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.233069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.366 [2024-04-25 20:29:14.233091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.366 [2024-04-25 20:29:14.236497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.366 [2024-04-25 20:29:14.236584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.367 [2024-04-25 20:29:14.236607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.367 [2024-04-25 20:29:14.240059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.367 [2024-04-25 20:29:14.240193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.367 [2024-04-25 20:29:14.240216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.367 [2024-04-25 20:29:14.243771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.367 [2024-04-25 20:29:14.243864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.367 [2024-04-25 20:29:14.243886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.367 [2024-04-25 20:29:14.247384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.367 [2024-04-25 20:29:14.247466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.367 [2024-04-25 20:29:14.247489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.367 [2024-04-25 20:29:14.250864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.367 [2024-04-25 20:29:14.250928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.367 [2024-04-25 20:29:14.250950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.367 [2024-04-25 20:29:14.254488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.367 [2024-04-25 20:29:14.254606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.367 [2024-04-25 20:29:14.254627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.367 [2024-04-25 20:29:14.257934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.367 [2024-04-25 20:29:14.258048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.367 [2024-04-25 20:29:14.258070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.367 [2024-04-25 20:29:14.261575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.367 [2024-04-25 20:29:14.261717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.367 [2024-04-25 20:29:14.261740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.367 [2024-04-25 20:29:14.264976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.367 [2024-04-25 20:29:14.265100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.367 [2024-04-25 20:29:14.265121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.367 [2024-04-25 20:29:14.268528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.367 [2024-04-25 20:29:14.268623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.367 [2024-04-25 20:29:14.268644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.367 [2024-04-25 20:29:14.272004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:34:16.367 [2024-04-25 20:29:14.272092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.367 [2024-04-25 20:29:14.272114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.367 [2024-04-25 20:29:14.275559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.367 [2024-04-25 20:29:14.275682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.367 [2024-04-25 20:29:14.275703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.367 [2024-04-25 20:29:14.279156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.367 [2024-04-25 20:29:14.279241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.367 [2024-04-25 20:29:14.279264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.367 [2024-04-25 20:29:14.282729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.367 [2024-04-25 20:29:14.282835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.367 [2024-04-25 20:29:14.282857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.367 [2024-04-25 20:29:14.286463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.367 [2024-04-25 20:29:14.286571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.367 [2024-04-25 20:29:14.286594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.367 [2024-04-25 20:29:14.290129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.367 [2024-04-25 20:29:14.290264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.367 [2024-04-25 20:29:14.290286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.367 [2024-04-25 20:29:14.293724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.367 [2024-04-25 20:29:14.293851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.367 [2024-04-25 20:29:14.293874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.629 [2024-04-25 20:29:14.297171] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.629 [2024-04-25 20:29:14.297258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.629 [2024-04-25 20:29:14.297280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.629 [2024-04-25 20:29:14.300614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.629 [2024-04-25 20:29:14.300707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.629 [2024-04-25 20:29:14.300730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.629 [2024-04-25 20:29:14.304009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.629 [2024-04-25 20:29:14.304096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.629 [2024-04-25 20:29:14.304116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.629 [2024-04-25 20:29:14.307438] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.629 [2024-04-25 20:29:14.307513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.629 [2024-04-25 20:29:14.307537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.629 [2024-04-25 20:29:14.311052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.629 [2024-04-25 20:29:14.311201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.629 [2024-04-25 20:29:14.311224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.629 [2024-04-25 20:29:14.314594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.629 [2024-04-25 20:29:14.314704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.629 [2024-04-25 20:29:14.314725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.629 [2024-04-25 20:29:14.318398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.629 [2024-04-25 20:29:14.318523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.629 [2024-04-25 20:29:14.318546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:16.629 [2024-04-25 20:29:14.321997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.629 [2024-04-25 20:29:14.322102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.629 [2024-04-25 20:29:14.322126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.629 [2024-04-25 20:29:14.325653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.629 [2024-04-25 20:29:14.325735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.629 [2024-04-25 20:29:14.325758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.629 [2024-04-25 20:29:14.329188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.629 [2024-04-25 20:29:14.329277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.629 [2024-04-25 20:29:14.329299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.629 [2024-04-25 20:29:14.332711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.629 [2024-04-25 20:29:14.332837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.629 [2024-04-25 20:29:14.332860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.336192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.336265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.336288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.339750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.339912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.339933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.343255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.343379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.343401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.346837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.346968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.346991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.350190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.350295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.350317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.353735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.353826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.353846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.357248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.357340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.357360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.360866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.360946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.360966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.364475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.364552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.364573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.368117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.368281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 
20:29:14.368301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.371740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.371860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.371882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.375350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.375506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.375527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.378993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.379095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.379119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.382551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.382626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.382646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.386098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.386231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.386252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.389610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.389688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.389707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.393058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.393125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.393145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.396798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.396928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.396949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.400346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.400466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.400489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.404084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.404236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.404257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.407817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.407964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.407985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.412014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.412130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.412152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.415838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.415946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.415966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.421174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.421289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.421311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.424911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.424975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.424996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.428525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.428670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.428692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.432020] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.630 [2024-04-25 20:29:14.432134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.630 [2024-04-25 20:29:14.432155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.630 [2024-04-25 20:29:14.435440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.435599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.435619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.438944] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.439062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.439085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.442464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.442556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.442582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.446021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.446140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.446163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.449584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.449678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.449700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.453166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.453235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.453257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.456947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.457089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.457110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.460352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.460436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.460459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.464029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.464156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.464178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.467602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.467707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.467729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.471299] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.471417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.471439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.474809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.474912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.474932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.478302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.478418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.478441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.481716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.481821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.481841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.485402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.485571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.485591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.488802] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.488882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.488902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.492495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.492661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.492682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.495970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.496113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.496133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.499635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.499734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.499754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.503257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.503357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.503377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.506764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.506848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.506868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.510320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.510442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.510463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.513946] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.514086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.514106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.517509] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.517602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.517622] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.521054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.521193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.521214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.524601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.524722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.524743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.631 [2024-04-25 20:29:14.528213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.631 [2024-04-25 20:29:14.528323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.631 [2024-04-25 20:29:14.528342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.632 [2024-04-25 20:29:14.531760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.632 [2024-04-25 20:29:14.531849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.632 [2024-04-25 20:29:14.531869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.632 [2024-04-25 20:29:14.535377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.632 [2024-04-25 20:29:14.535461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.632 [2024-04-25 20:29:14.535488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.632 [2024-04-25 20:29:14.539271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.632 [2024-04-25 20:29:14.539352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.632 [2024-04-25 20:29:14.539372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.632 [2024-04-25 20:29:14.543579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.632 [2024-04-25 20:29:14.543761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.632 [2024-04-25 
20:29:14.543781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.632 [2024-04-25 20:29:14.548118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.632 [2024-04-25 20:29:14.548287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.632 [2024-04-25 20:29:14.548307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.632 [2024-04-25 20:29:14.554127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.632 [2024-04-25 20:29:14.554297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.632 [2024-04-25 20:29:14.554318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.632 [2024-04-25 20:29:14.558472] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.632 [2024-04-25 20:29:14.558650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.632 [2024-04-25 20:29:14.558671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.894 [2024-04-25 20:29:14.562933] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.894 [2024-04-25 20:29:14.563113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.894 [2024-04-25 20:29:14.563133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.894 [2024-04-25 20:29:14.567528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.894 [2024-04-25 20:29:14.567657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.894 [2024-04-25 20:29:14.567680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.894 [2024-04-25 20:29:14.571839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.894 [2024-04-25 20:29:14.571965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.894 [2024-04-25 20:29:14.571986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.894 [2024-04-25 20:29:14.576187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.894 [2024-04-25 20:29:14.576293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.894 [2024-04-25 20:29:14.576313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.894 [2024-04-25 20:29:14.580710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.894 [2024-04-25 20:29:14.580863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.894 [2024-04-25 20:29:14.580884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.894 [2024-04-25 20:29:14.584920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.894 [2024-04-25 20:29:14.585052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.894 [2024-04-25 20:29:14.585074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.894 [2024-04-25 20:29:14.588560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.894 [2024-04-25 20:29:14.588653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.894 [2024-04-25 20:29:14.588673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.894 [2024-04-25 20:29:14.592321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.894 [2024-04-25 20:29:14.592458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.894 [2024-04-25 20:29:14.592478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.597388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.597502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.597523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.601550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.601653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.601674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.605275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.605369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.605391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.608957] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.609035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.609058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.612669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.612776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.612797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.616186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.616316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.616337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.619845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.619996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.620017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.623434] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.623571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.623595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.626923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.627070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.627092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.630602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.630702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.630724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.634232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.634293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.634313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.637942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.638016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.638036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.641734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.641875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.641896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.645402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.645477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.645503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.649141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.649295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.649318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.652538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.652678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.652699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.656109] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.656250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.656275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.659514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.659630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.659652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.663149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.663216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.663237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.666815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.666911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.666934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.671058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.671242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.671271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.675896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.676026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.676050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.679985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.680184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.680206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.895 
[2024-04-25 20:29:14.685043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.685175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.685197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.689409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.689587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.689609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.695238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.695391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.695413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.895 [2024-04-25 20:29:14.701074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.895 [2024-04-25 20:29:14.701234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.895 [2024-04-25 20:29:14.701256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.707294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.707504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.707529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.713852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.714018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.714040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.718641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.718770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.718793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.722810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.723032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.723055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.727037] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.727179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.727200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.731425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.731536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.731557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.736114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.736221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.736244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.739842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.739933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.739954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.743498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.743567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.743591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.747060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.747176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.747197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.750651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.750732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.750755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.754307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.754458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.754481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.757831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.757925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.757950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.761409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.761510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.761532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.764821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.764896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.764917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.768558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.768640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.768664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.772018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.772099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.772120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.775656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.775783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.775805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.779256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.779336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.779360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.782811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.782968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.782993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.786450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.786569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.786589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.790137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.790227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.790251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.793650] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.793744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.793763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.797209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.797344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.797365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.800780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.800858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.800878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.804424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.804578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.804599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.896 [2024-04-25 20:29:14.807909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.896 [2024-04-25 20:29:14.808000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.896 [2024-04-25 20:29:14.808021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.897 [2024-04-25 20:29:14.811535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.897 [2024-04-25 20:29:14.811680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.897 [2024-04-25 20:29:14.811703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.897 [2024-04-25 20:29:14.815088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.897 [2024-04-25 20:29:14.815182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.897 [2024-04-25 20:29:14.815203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.897 [2024-04-25 20:29:14.818699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.897 [2024-04-25 20:29:14.818796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.897 [2024-04-25 20:29:14.818816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.897 [2024-04-25 20:29:14.822353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:16.897 [2024-04-25 20:29:14.822488] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.897 [2024-04-25 20:29:14.822515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.158 [2024-04-25 20:29:14.825968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.158 [2024-04-25 20:29:14.826044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.158 [2024-04-25 20:29:14.826065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.158 [2024-04-25 20:29:14.829345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.158 [2024-04-25 20:29:14.829424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.158 [2024-04-25 20:29:14.829447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.158 [2024-04-25 20:29:14.832974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.158 [2024-04-25 20:29:14.833087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.158 [2024-04-25 20:29:14.833111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.158 [2024-04-25 20:29:14.836654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.158 [2024-04-25 20:29:14.836769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.158 [2024-04-25 20:29:14.836790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.158 [2024-04-25 20:29:14.840425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.158 [2024-04-25 20:29:14.840580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.158 [2024-04-25 20:29:14.840602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.158 [2024-04-25 20:29:14.843980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.158 [2024-04-25 20:29:14.844076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.158 [2024-04-25 20:29:14.844101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.158 [2024-04-25 20:29:14.847521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:34:17.158 [2024-04-25 20:29:14.847612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.158 [2024-04-25 20:29:14.847633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.158 [2024-04-25 20:29:14.851102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.158 [2024-04-25 20:29:14.851200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.158 [2024-04-25 20:29:14.851221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.158 [2024-04-25 20:29:14.854607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.854739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.854760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.858261] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.858348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.858369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.861811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.861935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.861957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.865396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.865486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.865514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.869169] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.869306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.869328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.872729] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.872836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.872857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.876301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.876455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.876476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.879774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.879883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.879904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.883377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.883453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.883474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.887035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.887135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.887155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.890535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.890637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.890656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.893969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.894064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.894085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.897606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.897754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.897776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.901368] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.901485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.901511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.904995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.905125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.905147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.908652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.908776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.908798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.912141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.912205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.912225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.915824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.915952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.915974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.919450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.919550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.919572] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.923100] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.923186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.923207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.926647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.926799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.926822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.930289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.930422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.930447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.933912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.934056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.934078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.937574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.937669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.937691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.941002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.941091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.941111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.944555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.944661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 
20:29:14.944684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.947927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.159 [2024-04-25 20:29:14.947997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.159 [2024-04-25 20:29:14.948020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.159 [2024-04-25 20:29:14.951457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:14.951546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:14.951568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:14.954926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:14.955084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:14.955107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:14.958594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:14.958759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:14.958783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:14.962175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:14.962316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:14.962339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:14.965791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:14.965889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:14.965910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:14.969480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:14.969606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:14.969628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:14.973208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:14.973332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:14.973355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:14.976763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:14.976857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:14.976880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:14.980373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:14.980500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:14.980523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:14.984005] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:14.984141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:14.984163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:14.987395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:14.987514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:14.987537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:14.990917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:14.991080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:14.991099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:14.994372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:14.994460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:14.994481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:14.997841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:14.997908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:14.997932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:15.001452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:15.001613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:15.001634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:15.005060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:15.005155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:15.005176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:15.008628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:15.008727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:15.008747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:15.013267] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:15.013433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:15.013455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:15.016809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:15.016943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:15.016965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:15.022177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:15.022317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:15.022341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:15.025862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:15.025984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:15.026005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:15.029348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:15.029437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:15.029458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:15.033082] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:15.033223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:15.033246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:15.036574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:15.036652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:15.036673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:15.040084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:15.040156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:15.040176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:15.043769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:15.043897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:15.043919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:15.047382] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:15.047525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.160 [2024-04-25 20:29:15.047548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.160 [2024-04-25 20:29:15.051009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.160 [2024-04-25 20:29:15.051174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.161 [2024-04-25 20:29:15.051195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.161 [2024-04-25 20:29:15.054580] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.161 [2024-04-25 20:29:15.054700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.161 [2024-04-25 20:29:15.054721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.161 [2024-04-25 20:29:15.058161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.161 [2024-04-25 20:29:15.058250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.161 [2024-04-25 20:29:15.058270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.161 [2024-04-25 20:29:15.061845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.161 [2024-04-25 20:29:15.061950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.161 [2024-04-25 20:29:15.061975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.161 [2024-04-25 20:29:15.065348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.161 [2024-04-25 20:29:15.065421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.161 [2024-04-25 20:29:15.065442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.161 [2024-04-25 20:29:15.068917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.161 [2024-04-25 20:29:15.068997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.161 [2024-04-25 20:29:15.069017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.161 
[2024-04-25 20:29:15.072580] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.161 [2024-04-25 20:29:15.072708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.161 [2024-04-25 20:29:15.072730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.161 [2024-04-25 20:29:15.076073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.161 [2024-04-25 20:29:15.076202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.161 [2024-04-25 20:29:15.076224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.161 [2024-04-25 20:29:15.079588] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.161 [2024-04-25 20:29:15.079715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.161 [2024-04-25 20:29:15.079736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.161 [2024-04-25 20:29:15.082984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.161 [2024-04-25 20:29:15.083100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.161 [2024-04-25 20:29:15.083119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.161 [2024-04-25 20:29:15.086691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.161 [2024-04-25 20:29:15.086796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.161 [2024-04-25 20:29:15.086820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.091232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.091378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.091406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.095141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.095222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.095243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.100284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.100387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.100408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.104031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.104181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.104203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.107511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.107642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.107664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.111216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.111362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.111383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.114905] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.115026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.115048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.118529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.118597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.118617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.122111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.122250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.122272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.125631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.125749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.125772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.129161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.129227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.129248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.132884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.133028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.133050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.136536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.136632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.136656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.140244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.140382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.140411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.143794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.143903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.143923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.147185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.147270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:17.423 [2024-04-25 20:29:15.147290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.150771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.150879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.150901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.154332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.154425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.154446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.157986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.158087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.158113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.161692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.161822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.161843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.165260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.165381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.165402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.168914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.169031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.169055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.172522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.172623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.172644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.176123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.176202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.176225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.179861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.179978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.180000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.183487] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.183567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.183588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.187098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.187182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.187203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.190785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.190914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.190937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.194233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.194356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.194379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.197753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.197884] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.197905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.201115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.201240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.201262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.204755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.204824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.204845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.208285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.208392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.208414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.211836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.211940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.211960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.215456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.215541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.215563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.219114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.219255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.219281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.222583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.222701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.222722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.226217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.226393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.226414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.229802] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.229944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.229966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.234140] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.234271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.234292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.237800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.237907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.237929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.242952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.243013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.243034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.246810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.246871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.246892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.250510] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.250639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.250661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.254063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.254173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.254193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.257634] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.257790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.257812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.261267] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.261361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.423 [2024-04-25 20:29:15.261382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.423 [2024-04-25 20:29:15.265221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.423 [2024-04-25 20:29:15.265328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.424 [2024-04-25 20:29:15.265349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.424 [2024-04-25 20:29:15.270409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.424 [2024-04-25 20:29:15.270642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.424 [2024-04-25 20:29:15.270663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:17.424 [2024-04-25 20:29:15.276312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.424 [2024-04-25 20:29:15.276414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.424 [2024-04-25 20:29:15.276435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:34:17.424 [2024-04-25 20:29:15.281845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.424 [2024-04-25 20:29:15.282011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.424 [2024-04-25 20:29:15.282032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:17.424 [2024-04-25 20:29:15.288408] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:34:17.424 [2024-04-25 20:29:15.288576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:17.424 [2024-04-25 20:29:15.288603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:17.424 00:34:17.424 Latency(us) 00:34:17.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.424 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:17.424 nvme0n1 : 2.00 6937.82 867.23 0.00 0.00 2301.94 1465.94 11934.45 00:34:17.424 =================================================================================================================== 00:34:17.424 Total : 6937.82 867.23 0.00 0.00 2301.94 1465.94 11934.45 00:34:17.424 0 00:34:17.424 20:29:15 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:17.424 20:29:15 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:17.424 20:29:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:17.424 20:29:15 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:17.424 | .driver_specific 00:34:17.424 | .nvme_error 00:34:17.424 | .status_code 00:34:17.424 | .command_transient_transport_error' 00:34:17.683 20:29:15 -- host/digest.sh@71 -- # (( 448 > 0 )) 00:34:17.683 20:29:15 -- host/digest.sh@73 -- # killprocess 1768524 00:34:17.683 20:29:15 -- common/autotest_common.sh@926 -- # '[' -z 1768524 ']' 00:34:17.683 20:29:15 -- common/autotest_common.sh@930 -- # kill -0 1768524 00:34:17.683 20:29:15 -- common/autotest_common.sh@931 -- # uname 00:34:17.683 20:29:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:17.683 20:29:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1768524 00:34:17.683 20:29:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:17.683 20:29:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:17.683 20:29:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1768524' 00:34:17.683 killing process with pid 1768524 00:34:17.683 20:29:15 -- common/autotest_common.sh@945 -- # kill 1768524 00:34:17.683 Received shutdown signal, test time was about 2.000000 seconds 00:34:17.683 00:34:17.683 Latency(us) 00:34:17.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.683 =================================================================================================================== 00:34:17.683 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:17.683 20:29:15 -- common/autotest_common.sh@950 -- # wait 1768524 00:34:17.941 20:29:15 -- host/digest.sh@115 -- # killprocess 1766145 00:34:17.941 20:29:15 -- 
common/autotest_common.sh@926 -- # '[' -z 1766145 ']' 00:34:17.941 20:29:15 -- common/autotest_common.sh@930 -- # kill -0 1766145 00:34:17.941 20:29:15 -- common/autotest_common.sh@931 -- # uname 00:34:17.941 20:29:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:17.941 20:29:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1766145 00:34:18.199 20:29:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:18.199 20:29:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:18.199 20:29:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1766145' 00:34:18.199 killing process with pid 1766145 00:34:18.199 20:29:15 -- common/autotest_common.sh@945 -- # kill 1766145 00:34:18.199 20:29:15 -- common/autotest_common.sh@950 -- # wait 1766145 00:34:18.457 00:34:18.457 real 0m16.852s 00:34:18.457 user 0m32.216s 00:34:18.457 sys 0m3.441s 00:34:18.457 20:29:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:18.457 20:29:16 -- common/autotest_common.sh@10 -- # set +x 00:34:18.457 ************************************ 00:34:18.457 END TEST nvmf_digest_error 00:34:18.457 ************************************ 00:34:18.457 20:29:16 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:34:18.457 20:29:16 -- host/digest.sh@139 -- # nvmftestfini 00:34:18.457 20:29:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:18.457 20:29:16 -- nvmf/common.sh@116 -- # sync 00:34:18.457 20:29:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:18.457 20:29:16 -- nvmf/common.sh@119 -- # set +e 00:34:18.457 20:29:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:18.457 20:29:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:18.458 rmmod nvme_tcp 00:34:18.718 rmmod nvme_fabrics 00:34:18.718 rmmod nvme_keyring 00:34:18.718 20:29:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:18.718 20:29:16 -- nvmf/common.sh@123 -- # set -e 00:34:18.718 20:29:16 -- nvmf/common.sh@124 -- # return 0 00:34:18.718 20:29:16 -- nvmf/common.sh@477 -- # '[' -n 1766145 ']' 00:34:18.718 20:29:16 -- nvmf/common.sh@478 -- # killprocess 1766145 00:34:18.718 20:29:16 -- common/autotest_common.sh@926 -- # '[' -z 1766145 ']' 00:34:18.718 20:29:16 -- common/autotest_common.sh@930 -- # kill -0 1766145 00:34:18.718 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1766145) - No such process 00:34:18.718 20:29:16 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1766145 is not found' 00:34:18.718 Process with pid 1766145 is not found 00:34:18.718 20:29:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:34:18.718 20:29:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:18.718 20:29:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:18.718 20:29:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:18.718 20:29:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:18.718 20:29:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.718 20:29:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:18.718 20:29:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.626 20:29:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:20.626 00:34:20.626 real 1m6.212s 00:34:20.626 user 1m35.418s 00:34:20.626 sys 0m11.317s 00:34:20.626 20:29:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:20.626 20:29:18 -- common/autotest_common.sh@10 -- # set +x 00:34:20.626 
************************************ 00:34:20.626 END TEST nvmf_digest 00:34:20.626 ************************************ 00:34:20.626 20:29:18 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:34:20.626 20:29:18 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:34:20.626 20:29:18 -- nvmf/nvmf.sh@119 -- # [[ phy-fallback == phy ]] 00:34:20.626 20:29:18 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:20.626 20:29:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:20.626 20:29:18 -- common/autotest_common.sh@10 -- # set +x 00:34:20.626 20:29:18 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:20.626 00:34:20.626 real 21m11.857s 00:34:20.626 user 58m12.823s 00:34:20.626 sys 4m39.596s 00:34:20.626 20:29:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:20.626 20:29:18 -- common/autotest_common.sh@10 -- # set +x 00:34:20.626 ************************************ 00:34:20.626 END TEST nvmf_tcp 00:34:20.626 ************************************ 00:34:20.886 20:29:18 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:34:20.886 20:29:18 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:20.886 20:29:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:34:20.886 20:29:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:20.886 20:29:18 -- common/autotest_common.sh@10 -- # set +x 00:34:20.886 ************************************ 00:34:20.886 START TEST spdkcli_nvmf_tcp 00:34:20.886 ************************************ 00:34:20.886 20:29:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:20.886 * Looking for test storage... 00:34:20.886 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli 00:34:20.886 20:29:18 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh 00:34:20.886 20:29:18 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:20.886 20:29:18 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py 00:34:20.886 20:29:18 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:34:20.886 20:29:18 -- nvmf/common.sh@7 -- # uname -s 00:34:20.886 20:29:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:20.886 20:29:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:20.886 20:29:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:20.886 20:29:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:20.886 20:29:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:20.886 20:29:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:20.886 20:29:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:20.886 20:29:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:20.886 20:29:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:20.886 20:29:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:20.887 20:29:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:34:20.887 20:29:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:34:20.887 20:29:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:20.887 20:29:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:20.887 
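A note on the check that closed the nvmf_digest_error run above: the pass/fail decision is just an RPC read of the bdev's NVMe error counters filtered through jq. A minimal sketch, using the socket, bdev name and jq filter exactly as they appear in the trace (get_transient_errcount/bperf_rpc are digest.sh shorthand for this pipeline; paths relative to the spdk checkout):

  errs=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
           | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 ))   # pass only if data-digest errors were actually detected and counted

In this run the count was 448, hence the (( 448 > 0 )) line in the trace above.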
20:29:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:34:20.887 20:29:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:34:20.887 20:29:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:20.887 20:29:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:20.887 20:29:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:20.887 20:29:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.887 20:29:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.887 20:29:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.887 20:29:18 -- paths/export.sh@5 -- # export PATH 00:34:20.887 20:29:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.887 20:29:18 -- nvmf/common.sh@46 -- # : 0 00:34:20.887 20:29:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:20.887 20:29:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:20.887 20:29:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:20.887 20:29:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:20.887 20:29:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:20.887 20:29:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:20.887 20:29:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:20.887 20:29:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:20.887 20:29:18 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:20.887 20:29:18 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:20.887 20:29:18 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:20.887 20:29:18 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:20.887 20:29:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:20.887 20:29:18 -- common/autotest_common.sh@10 -- # set +x 00:34:20.887 20:29:18 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:20.887 20:29:18 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1769932 00:34:20.887 20:29:18 -- spdkcli/common.sh@34 -- # waitforlisten 1769932 00:34:20.887 
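The spdkcli test starting here drives a plain nvmf_tgt instance over the default /var/tmp/spdk.sock RPC socket. Stripped of the xtrace noise, the bring-up traced around this point is roughly the following (waitforlisten is the autotest_common.sh helper that polls until the socket accepts RPCs; this is a sketch, not the literal spdkcli/common.sh body):

  build/bin/nvmf_tgt -m 0x3 -p 0 &
  nvmf_tgt_pid=$!
  waitforlisten "$nvmf_tgt_pid"        # defaults to /var/tmp/spdk.sock

Everything after this point is spdkcli_job.py issuing create/delete commands against that socket, with the resulting /nvmf tree compared by the match tool against spdkcli_nvmf.test.match.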
20:29:18 -- common/autotest_common.sh@819 -- # '[' -z 1769932 ']' 00:34:20.887 20:29:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.887 20:29:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:20.887 20:29:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:20.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:20.887 20:29:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:20.887 20:29:18 -- common/autotest_common.sh@10 -- # set +x 00:34:20.887 20:29:18 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:20.887 [2024-04-25 20:29:18.735917] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:20.887 [2024-04-25 20:29:18.736032] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1769932 ] 00:34:20.887 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.148 [2024-04-25 20:29:18.849194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:21.148 [2024-04-25 20:29:18.945859] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:21.148 [2024-04-25 20:29:18.946086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.148 [2024-04-25 20:29:18.946094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:21.718 20:29:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:21.718 20:29:19 -- common/autotest_common.sh@852 -- # return 0 00:34:21.718 20:29:19 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:21.718 20:29:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:21.718 20:29:19 -- common/autotest_common.sh@10 -- # set +x 00:34:21.718 20:29:19 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:21.718 20:29:19 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:21.718 20:29:19 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:21.718 20:29:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:21.718 20:29:19 -- common/autotest_common.sh@10 -- # set +x 00:34:21.718 20:29:19 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:21.718 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:21.718 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:21.718 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:21.718 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:21.718 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:21.718 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:21.718 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:21.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:21.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:21.718 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:21.718 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:21.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:21.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:21.718 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:21.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:21.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:21.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:21.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:21.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:21.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:21.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:21.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:21.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:21.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:21.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:21.719 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:21.719 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:21.719 ' 00:34:21.977 [2024-04-25 20:29:19.786961] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:34:24.511 [2024-04-25 20:29:21.841325] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:25.078 [2024-04-25 20:29:23.003223] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:27.619 [2024-04-25 20:29:25.134225] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:29.528 [2024-04-25 20:29:26.964899] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:30.467 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:30.467 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:30.467 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:30.467 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:30.467 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:30.467 Executing 
command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:30.467 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:30.467 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:30.467 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:30.467 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:30.467 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:30.467 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:30.467 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:30.468 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:30.468 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:30.468 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:30.468 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:30.468 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:30.468 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:30.468 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:30.468 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:30.468 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:30.468 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:30.468 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:30.468 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:30.468 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:30.468 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:30.468 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:30.729 20:29:28 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:30.729 20:29:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:30.729 20:29:28 -- common/autotest_common.sh@10 -- # set +x 00:34:30.729 20:29:28 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:30.729 20:29:28 -- common/autotest_common.sh@712 -- # 
xtrace_disable 00:34:30.729 20:29:28 -- common/autotest_common.sh@10 -- # set +x 00:34:30.729 20:29:28 -- spdkcli/nvmf.sh@69 -- # check_match 00:34:30.729 20:29:28 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:30.988 20:29:28 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:30.988 20:29:28 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:30.988 20:29:28 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:30.988 20:29:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:30.988 20:29:28 -- common/autotest_common.sh@10 -- # set +x 00:34:31.246 20:29:28 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:31.246 20:29:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:31.246 20:29:28 -- common/autotest_common.sh@10 -- # set +x 00:34:31.246 20:29:28 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:31.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:31.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:31.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:31.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:31.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:31.246 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:31.246 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:31.246 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:31.246 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:31.246 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:31.246 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:31.246 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:31.246 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:31.246 ' 00:34:36.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:36.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:36.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:36.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:36.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:36.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:36.546 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:36.546 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 
00:34:36.546 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:36.546 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:36.546 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:36.546 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:36.546 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:36.546 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:36.546 20:29:33 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:36.546 20:29:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:36.546 20:29:33 -- common/autotest_common.sh@10 -- # set +x 00:34:36.546 20:29:33 -- spdkcli/nvmf.sh@90 -- # killprocess 1769932 00:34:36.546 20:29:33 -- common/autotest_common.sh@926 -- # '[' -z 1769932 ']' 00:34:36.546 20:29:33 -- common/autotest_common.sh@930 -- # kill -0 1769932 00:34:36.546 20:29:33 -- common/autotest_common.sh@931 -- # uname 00:34:36.546 20:29:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:36.546 20:29:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1769932 00:34:36.546 20:29:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:36.546 20:29:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:36.546 20:29:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1769932' 00:34:36.546 killing process with pid 1769932 00:34:36.546 20:29:33 -- common/autotest_common.sh@945 -- # kill 1769932 00:34:36.546 [2024-04-25 20:29:33.948839] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:34:36.546 20:29:33 -- common/autotest_common.sh@950 -- # wait 1769932 00:34:36.546 20:29:34 -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:36.546 20:29:34 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:36.546 20:29:34 -- spdkcli/common.sh@13 -- # '[' -n 1769932 ']' 00:34:36.546 20:29:34 -- spdkcli/common.sh@14 -- # killprocess 1769932 00:34:36.546 20:29:34 -- common/autotest_common.sh@926 -- # '[' -z 1769932 ']' 00:34:36.546 20:29:34 -- common/autotest_common.sh@930 -- # kill -0 1769932 00:34:36.546 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1769932) - No such process 00:34:36.546 20:29:34 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1769932 is not found' 00:34:36.546 Process with pid 1769932 is not found 00:34:36.546 20:29:34 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:36.546 20:29:34 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:36.546 20:29:34 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:36.546 00:34:36.546 real 0m15.831s 00:34:36.546 user 0m32.024s 00:34:36.546 sys 0m0.737s 00:34:36.546 20:29:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:36.546 20:29:34 -- common/autotest_common.sh@10 -- # set +x 00:34:36.546 ************************************ 00:34:36.546 END TEST spdkcli_nvmf_tcp 00:34:36.546 ************************************ 00:34:36.546 20:29:34 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:36.546 20:29:34 -- 
common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:34:36.546 20:29:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:36.546 20:29:34 -- common/autotest_common.sh@10 -- # set +x 00:34:36.546 ************************************ 00:34:36.546 START TEST nvmf_identify_passthru 00:34:36.546 ************************************ 00:34:36.546 20:29:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:36.806 * Looking for test storage... 00:34:36.806 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:34:36.806 20:29:34 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:34:36.806 20:29:34 -- nvmf/common.sh@7 -- # uname -s 00:34:36.806 20:29:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:36.806 20:29:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:36.806 20:29:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:36.806 20:29:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:36.806 20:29:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:36.806 20:29:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:36.806 20:29:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:36.806 20:29:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:36.806 20:29:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:36.806 20:29:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:36.806 20:29:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:34:36.806 20:29:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:34:36.806 20:29:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:36.806 20:29:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:36.806 20:29:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:34:36.806 20:29:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:34:36.806 20:29:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:36.806 20:29:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:36.806 20:29:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:36.806 20:29:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.806 20:29:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.806 20:29:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.806 20:29:34 -- paths/export.sh@5 -- # export PATH 00:34:36.806 20:29:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.806 20:29:34 -- nvmf/common.sh@46 -- # : 0 00:34:36.806 20:29:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:36.806 20:29:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:36.806 20:29:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:36.806 20:29:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:36.806 20:29:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:36.806 20:29:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:36.806 20:29:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:36.806 20:29:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:36.806 20:29:34 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:34:36.806 20:29:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:36.806 20:29:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:36.806 20:29:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:36.806 20:29:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.806 20:29:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.806 20:29:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.806 20:29:34 -- paths/export.sh@5 -- # export PATH 00:34:36.806 20:29:34 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.806 20:29:34 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:36.806 20:29:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:36.806 20:29:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:36.806 20:29:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:36.806 20:29:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:36.806 20:29:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:36.806 20:29:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:36.806 20:29:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:36.806 20:29:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:36.806 20:29:34 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:34:36.806 20:29:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:34:36.806 20:29:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:34:36.806 20:29:34 -- common/autotest_common.sh@10 -- # set +x 00:34:42.079 20:29:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:42.079 20:29:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:42.079 20:29:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:42.079 20:29:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:42.079 20:29:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:42.079 20:29:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:42.079 20:29:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:42.079 20:29:39 -- nvmf/common.sh@294 -- # net_devs=() 00:34:42.079 20:29:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:42.079 20:29:39 -- nvmf/common.sh@295 -- # e810=() 00:34:42.079 20:29:39 -- nvmf/common.sh@295 -- # local -ga e810 00:34:42.079 20:29:39 -- nvmf/common.sh@296 -- # x722=() 00:34:42.079 20:29:39 -- nvmf/common.sh@296 -- # local -ga x722 00:34:42.079 20:29:39 -- nvmf/common.sh@297 -- # mlx=() 00:34:42.079 20:29:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:42.079 20:29:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:42.079 20:29:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:42.079 20:29:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:42.079 20:29:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:42.079 20:29:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:42.079 20:29:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:42.079 20:29:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:42.079 20:29:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:42.079 20:29:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:42.079 20:29:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:42.079 20:29:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:42.079 20:29:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:42.079 
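What follows is nvmftestinit detecting which NICs this node has: nvmf/common.sh builds per-family lists of vendor:device IDs and then walks the PCI bus looking for matches. Condensed to the entries relevant here (the comments are inferences from the variable names and from the "Found ... (0x8086 - 0x159b)" / ice-driver lines below, not from a datasheet):

  e810+=(${pci_bus_cache["$intel:0x1592"]})    # Intel E810 family
  e810+=(${pci_bus_cache["$intel:0x159b"]})    # the two ports found on this node, bound to the ice driver
  x722+=(${pci_bus_cache["$intel:0x37d2"]})    # Intel X722
  mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several Mellanox ConnectX IDs probed

Both 0000:27:00.0 and 0000:27:00.1 match 0x8086:0x159b, so the test ends up with cvl_0_0/cvl_0_1 as target and initiator interfaces and runs NVMe/TCP between 10.0.0.2 and 10.0.0.1 inside the cvl_0_0_ns_spdk namespace, as the ping checks further down confirm.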
20:29:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:42.079 20:29:39 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:34:42.079 20:29:39 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:34:42.079 20:29:39 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:34:42.079 20:29:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:42.079 20:29:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:42.079 20:29:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:34:42.079 Found 0000:27:00.0 (0x8086 - 0x159b) 00:34:42.079 20:29:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:42.079 20:29:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:42.079 20:29:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:42.079 20:29:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:42.079 20:29:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:42.079 20:29:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:42.079 20:29:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:34:42.079 Found 0000:27:00.1 (0x8086 - 0x159b) 00:34:42.079 20:29:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:42.079 20:29:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:42.079 20:29:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:42.079 20:29:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:42.079 20:29:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:42.079 20:29:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:42.079 20:29:39 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:34:42.079 20:29:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:42.079 20:29:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:42.079 20:29:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:42.079 20:29:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:42.079 20:29:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:34:42.079 Found net devices under 0000:27:00.0: cvl_0_0 00:34:42.079 20:29:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:42.079 20:29:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:42.079 20:29:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:42.079 20:29:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:42.079 20:29:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:42.079 20:29:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:34:42.079 Found net devices under 0000:27:00.1: cvl_0_1 00:34:42.079 20:29:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:42.079 20:29:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:42.079 20:29:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:42.079 20:29:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:42.079 20:29:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:42.079 20:29:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:42.079 20:29:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:42.080 20:29:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:42.080 20:29:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:42.080 20:29:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:42.080 20:29:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:42.080 20:29:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:42.080 20:29:39 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:42.080 20:29:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:42.080 20:29:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:42.080 20:29:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:42.080 20:29:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:42.080 20:29:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:42.080 20:29:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:42.080 20:29:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:42.080 20:29:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:42.080 20:29:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:42.080 20:29:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:42.080 20:29:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:42.080 20:29:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:42.080 20:29:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:42.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:42.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:34:42.080 00:34:42.080 --- 10.0.0.2 ping statistics --- 00:34:42.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:42.080 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:34:42.080 20:29:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:42.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:42.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:34:42.080 00:34:42.080 --- 10.0.0.1 ping statistics --- 00:34:42.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:42.080 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:34:42.080 20:29:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:42.080 20:29:39 -- nvmf/common.sh@410 -- # return 0 00:34:42.080 20:29:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:34:42.080 20:29:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:42.080 20:29:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:42.080 20:29:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:42.080 20:29:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:42.080 20:29:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:42.080 20:29:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:42.080 20:29:39 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:42.080 20:29:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:42.080 20:29:39 -- common/autotest_common.sh@10 -- # set +x 00:34:42.080 20:29:39 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:42.080 20:29:39 -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:42.080 20:29:39 -- common/autotest_common.sh@1509 -- # local bdfs 00:34:42.080 20:29:39 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:34:42.080 20:29:39 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:34:42.080 20:29:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:42.080 20:29:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:34:42.080 20:29:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:42.080 20:29:39 -- 
common/autotest_common.sh@1499 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:42.080 20:29:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:42.080 20:29:39 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:34:42.080 20:29:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:34:42.080 20:29:39 -- common/autotest_common.sh@1512 -- # echo 0000:03:00.0 00:34:42.080 20:29:39 -- target/identify_passthru.sh@16 -- # bdf=0000:03:00.0 00:34:42.080 20:29:39 -- target/identify_passthru.sh@17 -- # '[' -z 0000:03:00.0 ']' 00:34:42.080 20:29:39 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:42.080 20:29:39 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:03:00.0' -i 0 00:34:42.080 20:29:39 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:42.080 EAL: No free 2048 kB hugepages reported on node 1 00:34:43.467 20:29:40 -- target/identify_passthru.sh@23 -- # nvme_serial_number=233442AA2262 00:34:43.467 20:29:40 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:03:00.0' -i 0 00:34:43.467 20:29:40 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:43.467 20:29:40 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:43.467 EAL: No free 2048 kB hugepages reported on node 1 00:34:44.406 20:29:42 -- target/identify_passthru.sh@24 -- # nvme_model_number=Micron_7450_MTFDKBA960TFR 00:34:44.406 20:29:42 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:44.406 20:29:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:44.406 20:29:42 -- common/autotest_common.sh@10 -- # set +x 00:34:44.406 20:29:42 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:44.406 20:29:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:44.406 20:29:42 -- common/autotest_common.sh@10 -- # set +x 00:34:44.406 20:29:42 -- target/identify_passthru.sh@31 -- # nvmfpid=1776897 00:34:44.406 20:29:42 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:44.406 20:29:42 -- target/identify_passthru.sh@35 -- # waitforlisten 1776897 00:34:44.406 20:29:42 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:44.406 20:29:42 -- common/autotest_common.sh@819 -- # '[' -z 1776897 ']' 00:34:44.406 20:29:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.406 20:29:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:44.406 20:29:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:44.406 20:29:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:44.406 20:29:42 -- common/autotest_common.sh@10 -- # set +x 00:34:44.406 [2024-04-25 20:29:42.309382] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
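At this point the identify_passthru test has already recorded the local drive's identity directly over PCIe, so it can later verify that the same values come back through the NVMe-oF passthru path. A condensed paraphrase of the get_first_nvme_bdf / spdk_nvme_identify steps traced above (head -1 stands in for the helper's "take the first bdf" logic; paths relative to the spdk checkout):

  bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -1)          # 0000:03:00.0 on this node
  serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
             | grep 'Serial Number:' | awk '{print $3}')                          # 233442AA2262
  model=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
             | grep 'Model Number:' | awk '{print $3}')                           # Micron_7450_MTFDKBA960TFR

Later in the run the same grep/awk pair is applied to spdk_nvme_identify pointed at trtype:tcp traddr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1, and the test only passes if both the serial and model numbers match.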
00:34:44.406 [2024-04-25 20:29:42.309525] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:44.667 EAL: No free 2048 kB hugepages reported on node 1 00:34:44.667 [2024-04-25 20:29:42.437192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:44.667 [2024-04-25 20:29:42.536402] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:44.667 [2024-04-25 20:29:42.536594] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:44.667 [2024-04-25 20:29:42.536609] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:44.667 [2024-04-25 20:29:42.536619] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:44.667 [2024-04-25 20:29:42.536774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:44.667 [2024-04-25 20:29:42.536870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:44.667 [2024-04-25 20:29:42.536972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.667 [2024-04-25 20:29:42.536983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:45.236 20:29:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:45.236 20:29:43 -- common/autotest_common.sh@852 -- # return 0 00:34:45.236 20:29:43 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:45.236 20:29:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.236 20:29:43 -- common/autotest_common.sh@10 -- # set +x 00:34:45.236 INFO: Log level set to 20 00:34:45.236 INFO: Requests: 00:34:45.236 { 00:34:45.236 "jsonrpc": "2.0", 00:34:45.236 "method": "nvmf_set_config", 00:34:45.236 "id": 1, 00:34:45.236 "params": { 00:34:45.236 "admin_cmd_passthru": { 00:34:45.236 "identify_ctrlr": true 00:34:45.236 } 00:34:45.236 } 00:34:45.236 } 00:34:45.236 00:34:45.236 INFO: response: 00:34:45.236 { 00:34:45.236 "jsonrpc": "2.0", 00:34:45.236 "id": 1, 00:34:45.236 "result": true 00:34:45.236 } 00:34:45.236 00:34:45.236 20:29:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.236 20:29:43 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:45.236 20:29:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.236 20:29:43 -- common/autotest_common.sh@10 -- # set +x 00:34:45.236 INFO: Setting log level to 20 00:34:45.236 INFO: Setting log level to 20 00:34:45.236 INFO: Log level set to 20 00:34:45.236 INFO: Log level set to 20 00:34:45.236 INFO: Requests: 00:34:45.236 { 00:34:45.236 "jsonrpc": "2.0", 00:34:45.236 "method": "framework_start_init", 00:34:45.236 "id": 1 00:34:45.236 } 00:34:45.236 00:34:45.236 INFO: Requests: 00:34:45.236 { 00:34:45.236 "jsonrpc": "2.0", 00:34:45.236 "method": "framework_start_init", 00:34:45.236 "id": 1 00:34:45.236 } 00:34:45.236 00:34:45.494 [2024-04-25 20:29:43.192715] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:45.494 INFO: response: 00:34:45.494 { 00:34:45.494 "jsonrpc": "2.0", 00:34:45.494 "id": 1, 00:34:45.494 "result": true 00:34:45.494 } 00:34:45.494 00:34:45.494 INFO: response: 00:34:45.494 { 00:34:45.494 "jsonrpc": "2.0", 00:34:45.494 "id": 1, 00:34:45.494 "result": true 00:34:45.494 } 00:34:45.494 00:34:45.494 20:29:43 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.494 20:29:43 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:45.494 20:29:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.494 20:29:43 -- common/autotest_common.sh@10 -- # set +x 00:34:45.494 INFO: Setting log level to 40 00:34:45.494 INFO: Setting log level to 40 00:34:45.494 INFO: Setting log level to 40 00:34:45.494 [2024-04-25 20:29:43.206758] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:45.494 20:29:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.494 20:29:43 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:45.494 20:29:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:45.494 20:29:43 -- common/autotest_common.sh@10 -- # set +x 00:34:45.494 20:29:43 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:03:00.0 00:34:45.494 20:29:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.494 20:29:43 -- common/autotest_common.sh@10 -- # set +x 00:34:45.752 Nvme0n1 00:34:45.752 20:29:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.752 20:29:43 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:45.752 20:29:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.752 20:29:43 -- common/autotest_common.sh@10 -- # set +x 00:34:45.752 20:29:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.752 20:29:43 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:45.752 20:29:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.752 20:29:43 -- common/autotest_common.sh@10 -- # set +x 00:34:45.752 20:29:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.752 20:29:43 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:45.752 20:29:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.752 20:29:43 -- common/autotest_common.sh@10 -- # set +x 00:34:45.752 [2024-04-25 20:29:43.645141] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:45.752 20:29:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.752 20:29:43 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:45.752 20:29:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.752 20:29:43 -- common/autotest_common.sh@10 -- # set +x 00:34:45.752 [2024-04-25 20:29:43.652870] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:34:45.753 [ 00:34:45.753 { 00:34:45.753 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:45.753 "subtype": "Discovery", 00:34:45.753 "listen_addresses": [], 00:34:45.753 "allow_any_host": true, 00:34:45.753 "hosts": [] 00:34:45.753 }, 00:34:45.753 { 00:34:45.753 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:45.753 "subtype": "NVMe", 00:34:45.753 "listen_addresses": [ 00:34:45.753 { 00:34:45.753 "transport": "TCP", 00:34:45.753 "trtype": "TCP", 00:34:45.753 "adrfam": "IPv4", 00:34:45.753 "traddr": "10.0.0.2", 00:34:45.753 "trsvcid": "4420" 00:34:45.753 } 00:34:45.753 ], 00:34:45.753 "allow_any_host": true, 00:34:45.753 "hosts": [], 00:34:45.753 "serial_number": "SPDK00000000000001", 
00:34:45.753 "model_number": "SPDK bdev Controller", 00:34:45.753 "max_namespaces": 1, 00:34:45.753 "min_cntlid": 1, 00:34:45.753 "max_cntlid": 65519, 00:34:45.753 "namespaces": [ 00:34:45.753 { 00:34:45.753 "nsid": 1, 00:34:45.753 "bdev_name": "Nvme0n1", 00:34:45.753 "name": "Nvme0n1", 00:34:45.753 "nguid": "000000000000000100A0752342AA2262", 00:34:45.753 "uuid": "00000000-0000-0001-00a0-752342aa2262" 00:34:45.753 } 00:34:45.753 ] 00:34:45.753 } 00:34:45.753 ] 00:34:45.753 20:29:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.753 20:29:43 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:45.753 20:29:43 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:45.753 20:29:43 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:46.010 EAL: No free 2048 kB hugepages reported on node 1 00:34:46.010 20:29:43 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=233442AA2262 00:34:46.010 20:29:43 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:46.010 20:29:43 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:46.010 20:29:43 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:46.010 EAL: No free 2048 kB hugepages reported on node 1 00:34:46.268 20:29:44 -- target/identify_passthru.sh@61 -- # nvmf_model_number=Micron_7450_MTFDKBA960TFR 00:34:46.268 20:29:44 -- target/identify_passthru.sh@63 -- # '[' 233442AA2262 '!=' 233442AA2262 ']' 00:34:46.268 20:29:44 -- target/identify_passthru.sh@68 -- # '[' Micron_7450_MTFDKBA960TFR '!=' Micron_7450_MTFDKBA960TFR ']' 00:34:46.268 20:29:44 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:46.268 20:29:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:46.268 20:29:44 -- common/autotest_common.sh@10 -- # set +x 00:34:46.268 20:29:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:46.268 20:29:44 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:46.268 20:29:44 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:46.268 20:29:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:46.268 20:29:44 -- nvmf/common.sh@116 -- # sync 00:34:46.268 20:29:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:46.268 20:29:44 -- nvmf/common.sh@119 -- # set +e 00:34:46.268 20:29:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:46.268 20:29:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:46.268 rmmod nvme_tcp 00:34:46.540 rmmod nvme_fabrics 00:34:46.540 rmmod nvme_keyring 00:34:46.540 20:29:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:46.540 20:29:44 -- nvmf/common.sh@123 -- # set -e 00:34:46.540 20:29:44 -- nvmf/common.sh@124 -- # return 0 00:34:46.540 20:29:44 -- nvmf/common.sh@477 -- # '[' -n 1776897 ']' 00:34:46.540 20:29:44 -- nvmf/common.sh@478 -- # killprocess 1776897 00:34:46.540 20:29:44 -- common/autotest_common.sh@926 -- # '[' -z 1776897 ']' 00:34:46.540 20:29:44 -- common/autotest_common.sh@930 -- # kill -0 1776897 00:34:46.540 20:29:44 -- common/autotest_common.sh@931 -- # uname 00:34:46.540 20:29:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:46.540 20:29:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1776897 
00:34:46.540 20:29:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:46.540 20:29:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:46.540 20:29:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1776897' 00:34:46.540 killing process with pid 1776897 00:34:46.540 20:29:44 -- common/autotest_common.sh@945 -- # kill 1776897 00:34:46.540 [2024-04-25 20:29:44.284566] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:34:46.540 20:29:44 -- common/autotest_common.sh@950 -- # wait 1776897 00:34:47.919 20:29:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:34:47.919 20:29:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:47.919 20:29:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:47.919 20:29:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:47.919 20:29:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:47.919 20:29:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.919 20:29:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:47.919 20:29:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.822 20:29:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:49.822 00:34:49.822 real 0m13.112s 00:34:49.822 user 0m14.394s 00:34:49.822 sys 0m4.608s 00:34:49.822 20:29:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:49.822 20:29:47 -- common/autotest_common.sh@10 -- # set +x 00:34:49.822 ************************************ 00:34:49.822 END TEST nvmf_identify_passthru 00:34:49.822 ************************************ 00:34:49.822 20:29:47 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:49.822 20:29:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:49.822 20:29:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:49.822 20:29:47 -- common/autotest_common.sh@10 -- # set +x 00:34:49.822 ************************************ 00:34:49.822 START TEST nvmf_dif 00:34:49.822 ************************************ 00:34:49.822 20:29:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:49.822 * Looking for test storage... 
00:34:49.822 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:34:49.822 20:29:47 -- target/dif.sh@13 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:34:49.822 20:29:47 -- nvmf/common.sh@7 -- # uname -s 00:34:49.822 20:29:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:49.822 20:29:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:49.822 20:29:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:49.822 20:29:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:49.822 20:29:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:49.822 20:29:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:49.822 20:29:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:49.822 20:29:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:49.822 20:29:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:49.822 20:29:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:49.822 20:29:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:34:49.822 20:29:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:34:49.822 20:29:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:49.822 20:29:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:49.823 20:29:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:34:49.823 20:29:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:34:49.823 20:29:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:49.823 20:29:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:49.823 20:29:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:49.823 20:29:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.823 20:29:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.823 20:29:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.823 20:29:47 -- paths/export.sh@5 -- # export PATH 00:34:49.823 20:29:47 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.823 20:29:47 -- nvmf/common.sh@46 -- # : 0 00:34:49.823 20:29:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:49.823 20:29:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:49.823 20:29:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:49.823 20:29:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:49.823 20:29:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:49.823 20:29:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:49.823 20:29:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:49.823 20:29:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:49.823 20:29:47 -- target/dif.sh@15 -- # NULL_META=16 00:34:49.823 20:29:47 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:49.823 20:29:47 -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:49.823 20:29:47 -- target/dif.sh@15 -- # NULL_DIF=1 00:34:49.823 20:29:47 -- target/dif.sh@135 -- # nvmftestinit 00:34:49.823 20:29:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:49.823 20:29:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:49.823 20:29:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:49.823 20:29:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:49.823 20:29:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:49.823 20:29:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.823 20:29:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:49.823 20:29:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.823 20:29:47 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:34:49.823 20:29:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:34:49.823 20:29:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:34:49.823 20:29:47 -- common/autotest_common.sh@10 -- # set +x 00:34:55.103 20:29:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:55.103 20:29:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:55.103 20:29:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:55.103 20:29:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:55.103 20:29:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:55.103 20:29:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:55.103 20:29:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:55.103 20:29:52 -- nvmf/common.sh@294 -- # net_devs=() 00:34:55.103 20:29:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:55.103 20:29:52 -- nvmf/common.sh@295 -- # e810=() 00:34:55.103 20:29:52 -- nvmf/common.sh@295 -- # local -ga e810 00:34:55.103 20:29:52 -- nvmf/common.sh@296 -- # x722=() 00:34:55.103 20:29:52 -- nvmf/common.sh@296 -- # local -ga x722 00:34:55.103 20:29:52 -- nvmf/common.sh@297 -- # mlx=() 00:34:55.103 20:29:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:55.103 20:29:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:55.103 20:29:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:55.103 20:29:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:55.103 20:29:52 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:55.103 20:29:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:55.103 20:29:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:55.103 20:29:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:55.103 20:29:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:55.103 20:29:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:55.103 20:29:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:55.103 20:29:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:55.103 20:29:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:55.103 20:29:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:55.103 20:29:52 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:34:55.103 20:29:52 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:34:55.103 20:29:52 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:34:55.103 20:29:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:55.103 20:29:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:55.104 20:29:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:34:55.104 Found 0000:27:00.0 (0x8086 - 0x159b) 00:34:55.104 20:29:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:55.104 20:29:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:55.104 20:29:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:55.104 20:29:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:55.104 20:29:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:55.104 20:29:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:55.104 20:29:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:34:55.104 Found 0000:27:00.1 (0x8086 - 0x159b) 00:34:55.104 20:29:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:55.104 20:29:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:55.104 20:29:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:55.104 20:29:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:55.104 20:29:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:55.104 20:29:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:55.104 20:29:52 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:34:55.104 20:29:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:55.104 20:29:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:55.104 20:29:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:55.104 20:29:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:55.104 20:29:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:34:55.104 Found net devices under 0000:27:00.0: cvl_0_0 00:34:55.104 20:29:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:55.104 20:29:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:55.104 20:29:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:55.104 20:29:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:55.104 20:29:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:55.104 20:29:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:34:55.104 Found net devices under 0000:27:00.1: cvl_0_1 00:34:55.104 20:29:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:55.104 20:29:52 -- 
nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:55.104 20:29:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:55.104 20:29:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:55.104 20:29:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:55.104 20:29:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:55.104 20:29:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:55.104 20:29:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:55.104 20:29:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:55.104 20:29:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:55.104 20:29:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:55.104 20:29:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:55.104 20:29:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:55.104 20:29:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:55.104 20:29:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:55.104 20:29:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:55.104 20:29:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:55.104 20:29:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:55.104 20:29:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:55.104 20:29:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:55.104 20:29:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:55.104 20:29:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:55.104 20:29:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:55.104 20:29:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:55.104 20:29:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:55.104 20:29:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:55.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:55.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:34:55.104 00:34:55.104 --- 10.0.0.2 ping statistics --- 00:34:55.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:55.104 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:34:55.104 20:29:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:55.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:55.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:34:55.104 00:34:55.104 --- 10.0.0.1 ping statistics --- 00:34:55.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:55.104 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:34:55.104 20:29:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:55.104 20:29:53 -- nvmf/common.sh@410 -- # return 0 00:34:55.104 20:29:53 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:34:55.104 20:29:53 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:34:57.637 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:34:57.637 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:34:57.637 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:34:57.637 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:34:57.637 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:34:57.637 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:34:57.637 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:34:57.637 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:34:57.637 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:34:57.637 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:34:57.637 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:34:57.637 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:34:57.637 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:34:57.637 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:34:57.637 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:34:57.637 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:34:57.637 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:34:57.637 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:34:57.895 20:29:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:57.895 20:29:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:57.895 20:29:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:57.895 20:29:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:57.895 20:29:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:57.895 20:29:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:57.895 20:29:55 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:57.895 20:29:55 -- target/dif.sh@137 -- # nvmfappstart 00:34:57.895 20:29:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:57.895 20:29:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:57.895 20:29:55 -- common/autotest_common.sh@10 -- # set +x 00:34:57.895 20:29:55 -- nvmf/common.sh@469 -- # nvmfpid=1782893 00:34:57.895 20:29:55 -- nvmf/common.sh@470 -- # waitforlisten 1782893 00:34:57.895 20:29:55 -- common/autotest_common.sh@819 -- # '[' -z 1782893 ']' 00:34:57.895 20:29:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:57.895 20:29:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:57.895 20:29:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:57.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
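Condensed, the fabric bring-up logged above for the dif tests is the following sequence (a minimal sketch assembled from the commands in this run; the cvl_0_* interface names, the 10.0.0.x addresses and the backgrounding of nvmf_tgt are specific to this node and harness):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator-side port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  # block until the target answers on /var/tmp/spdk.sock before issuing RPCs (the waitforlisten step above)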
00:34:57.895 20:29:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:57.895 20:29:55 -- common/autotest_common.sh@10 -- # set +x 00:34:57.895 20:29:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:57.895 [2024-04-25 20:29:55.716389] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:57.895 [2024-04-25 20:29:55.716456] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:57.895 EAL: No free 2048 kB hugepages reported on node 1 00:34:57.895 [2024-04-25 20:29:55.804165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.153 [2024-04-25 20:29:55.894114] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:58.153 [2024-04-25 20:29:55.894272] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:58.153 [2024-04-25 20:29:55.894284] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:58.153 [2024-04-25 20:29:55.894293] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:58.153 [2024-04-25 20:29:55.894318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:58.723 20:29:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:58.723 20:29:56 -- common/autotest_common.sh@852 -- # return 0 00:34:58.723 20:29:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:58.723 20:29:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:58.723 20:29:56 -- common/autotest_common.sh@10 -- # set +x 00:34:58.723 20:29:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:58.723 20:29:56 -- target/dif.sh@139 -- # create_transport 00:34:58.723 20:29:56 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:58.723 20:29:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.723 20:29:56 -- common/autotest_common.sh@10 -- # set +x 00:34:58.723 [2024-04-25 20:29:56.448933] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:58.723 20:29:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.723 20:29:56 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:58.723 20:29:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:58.723 20:29:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:58.723 20:29:56 -- common/autotest_common.sh@10 -- # set +x 00:34:58.723 ************************************ 00:34:58.723 START TEST fio_dif_1_default 00:34:58.723 ************************************ 00:34:58.723 20:29:56 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:34:58.723 20:29:56 -- target/dif.sh@86 -- # create_subsystems 0 00:34:58.724 20:29:56 -- target/dif.sh@28 -- # local sub 00:34:58.724 20:29:56 -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.724 20:29:56 -- target/dif.sh@31 -- # create_subsystem 0 00:34:58.724 20:29:56 -- target/dif.sh@18 -- # local sub_id=0 00:34:58.724 20:29:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:58.724 20:29:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.724 20:29:56 -- 
common/autotest_common.sh@10 -- # set +x 00:34:58.724 bdev_null0 00:34:58.724 20:29:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.724 20:29:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:58.724 20:29:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.724 20:29:56 -- common/autotest_common.sh@10 -- # set +x 00:34:58.724 20:29:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.724 20:29:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:58.724 20:29:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.724 20:29:56 -- common/autotest_common.sh@10 -- # set +x 00:34:58.724 20:29:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.724 20:29:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:58.724 20:29:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.724 20:29:56 -- common/autotest_common.sh@10 -- # set +x 00:34:58.724 [2024-04-25 20:29:56.489094] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:58.724 20:29:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.724 20:29:56 -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:58.724 20:29:56 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:58.724 20:29:56 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:58.724 20:29:56 -- nvmf/common.sh@520 -- # config=() 00:34:58.724 20:29:56 -- nvmf/common.sh@520 -- # local subsystem config 00:34:58.724 20:29:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:58.724 20:29:56 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.724 20:29:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:58.724 { 00:34:58.724 "params": { 00:34:58.724 "name": "Nvme$subsystem", 00:34:58.724 "trtype": "$TEST_TRANSPORT", 00:34:58.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.724 "adrfam": "ipv4", 00:34:58.724 "trsvcid": "$NVMF_PORT", 00:34:58.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.724 "hdgst": ${hdgst:-false}, 00:34:58.724 "ddgst": ${ddgst:-false} 00:34:58.724 }, 00:34:58.724 "method": "bdev_nvme_attach_controller" 00:34:58.724 } 00:34:58.724 EOF 00:34:58.724 )") 00:34:58.724 20:29:56 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.724 20:29:56 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:58.724 20:29:56 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:58.724 20:29:56 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:58.724 20:29:56 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.724 20:29:56 -- common/autotest_common.sh@1320 -- # shift 00:34:58.724 20:29:56 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:58.724 20:29:56 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.724 20:29:56 -- target/dif.sh@82 -- # gen_fio_conf 00:34:58.724 20:29:56 -- target/dif.sh@54 -- # local file 00:34:58.724 20:29:56 -- target/dif.sh@56 -- # cat 00:34:58.724 20:29:56 -- nvmf/common.sh@542 -- # cat 00:34:58.724 
20:29:56 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.724 20:29:56 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:58.724 20:29:56 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:58.724 20:29:56 -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.724 20:29:56 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:58.724 20:29:56 -- nvmf/common.sh@544 -- # jq . 00:34:58.724 20:29:56 -- nvmf/common.sh@545 -- # IFS=, 00:34:58.724 20:29:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:58.724 "params": { 00:34:58.724 "name": "Nvme0", 00:34:58.724 "trtype": "tcp", 00:34:58.724 "traddr": "10.0.0.2", 00:34:58.724 "adrfam": "ipv4", 00:34:58.724 "trsvcid": "4420", 00:34:58.724 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:58.724 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:58.724 "hdgst": false, 00:34:58.724 "ddgst": false 00:34:58.724 }, 00:34:58.724 "method": "bdev_nvme_attach_controller" 00:34:58.724 }' 00:34:58.724 20:29:56 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:58.724 20:29:56 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:58.724 20:29:56 -- common/autotest_common.sh@1326 -- # break 00:34:58.724 20:29:56 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:58.724 20:29:56 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.983 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:58.983 fio-3.35 00:34:58.983 Starting 1 thread 00:34:59.241 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.500 [2024-04-25 20:29:57.389267] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
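The fio_dif_1_default case above wires one DIF type 1 null bdev into a TCP subsystem and then hands fio the attach-controller JSON printed just before the job banner. rpc_cmd in the log is the autotest wrapper around the target's JSON-RPC socket; issued by hand the same setup would look roughly like this sketch, with the flags copied from the logged rpc_cmd calls and the default /var/tmp/spdk.sock socket assumed:

  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

fio then reaches that listener entirely through the userspace spdk_bdev plugin via the bdev_nvme_attach_controller parameters shown above, so the kernel nvme-tcp module loaded earlier is not in this data path.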
00:34:59.500 [2024-04-25 20:29:57.389330] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:11.789 00:35:11.789 filename0: (groupid=0, jobs=1): err= 0: pid=1783365: Thu Apr 25 20:30:07 2024 00:35:11.789 read: IOPS=190, BW=760KiB/s (778kB/s)(7632KiB/10039msec) 00:35:11.789 slat (nsec): min=5993, max=34379, avg=7099.70, stdev=1889.89 00:35:11.789 clat (usec): min=507, max=42349, avg=21025.49, stdev=20194.16 00:35:11.789 lat (usec): min=513, max=42383, avg=21032.59, stdev=20193.86 00:35:11.789 clat percentiles (usec): 00:35:11.789 | 1.00th=[ 545], 5.00th=[ 742], 10.00th=[ 766], 20.00th=[ 783], 00:35:11.789 | 30.00th=[ 824], 40.00th=[ 857], 50.00th=[40633], 60.00th=[41157], 00:35:11.789 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:11.789 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:35:11.789 | 99.99th=[42206] 00:35:11.789 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=761.60, stdev=19.70, samples=20 00:35:11.789 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:35:11.789 lat (usec) : 750=5.61%, 1000=44.29% 00:35:11.789 lat (msec) : 50=50.10% 00:35:11.789 cpu : usr=96.14%, sys=3.55%, ctx=14, majf=0, minf=1635 00:35:11.789 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:11.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.789 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.789 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:11.789 00:35:11.789 Run status group 0 (all jobs): 00:35:11.789 READ: bw=760KiB/s (778kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=7632KiB (7815kB), run=10039-10039msec 00:35:11.789 ----------------------------------------------------- 00:35:11.789 Suppressions used: 00:35:11.789 count bytes template 00:35:11.789 1 8 /usr/src/fio/parse.c 00:35:11.789 1 8 libtcmalloc_minimal.so 00:35:11.789 1 904 libcrypto.so 00:35:11.789 ----------------------------------------------------- 00:35:11.789 00:35:11.789 20:30:08 -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:11.789 20:30:08 -- target/dif.sh@43 -- # local sub 00:35:11.789 20:30:08 -- target/dif.sh@45 -- # for sub in "$@" 00:35:11.789 20:30:08 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:11.789 20:30:08 -- target/dif.sh@36 -- # local sub_id=0 00:35:11.789 20:30:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:11.789 20:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:11.789 20:30:08 -- common/autotest_common.sh@10 -- # set +x 00:35:11.789 20:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:11.789 20:30:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:11.789 20:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:11.789 20:30:08 -- common/autotest_common.sh@10 -- # set +x 00:35:11.789 20:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:11.789 00:35:11.789 real 0m11.700s 00:35:11.789 user 0m23.728s 00:35:11.789 sys 0m0.775s 00:35:11.789 20:30:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:11.789 20:30:08 -- common/autotest_common.sh@10 -- # set +x 00:35:11.789 ************************************ 00:35:11.789 END TEST fio_dif_1_default 00:35:11.789 ************************************ 00:35:11.789 20:30:08 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems 
fio_dif_1_multi_subsystems 00:35:11.789 20:30:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:11.789 20:30:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:11.789 20:30:08 -- common/autotest_common.sh@10 -- # set +x 00:35:11.789 ************************************ 00:35:11.789 START TEST fio_dif_1_multi_subsystems 00:35:11.789 ************************************ 00:35:11.789 20:30:08 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:35:11.789 20:30:08 -- target/dif.sh@92 -- # local files=1 00:35:11.789 20:30:08 -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:11.789 20:30:08 -- target/dif.sh@28 -- # local sub 00:35:11.789 20:30:08 -- target/dif.sh@30 -- # for sub in "$@" 00:35:11.789 20:30:08 -- target/dif.sh@31 -- # create_subsystem 0 00:35:11.789 20:30:08 -- target/dif.sh@18 -- # local sub_id=0 00:35:11.789 20:30:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:11.789 20:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:11.789 20:30:08 -- common/autotest_common.sh@10 -- # set +x 00:35:11.789 bdev_null0 00:35:11.789 20:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:11.789 20:30:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:11.789 20:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:11.789 20:30:08 -- common/autotest_common.sh@10 -- # set +x 00:35:11.789 20:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:11.789 20:30:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:11.789 20:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:11.789 20:30:08 -- common/autotest_common.sh@10 -- # set +x 00:35:11.789 20:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:11.789 20:30:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:11.789 20:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:11.789 20:30:08 -- common/autotest_common.sh@10 -- # set +x 00:35:11.789 [2024-04-25 20:30:08.226942] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:11.789 20:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:11.789 20:30:08 -- target/dif.sh@30 -- # for sub in "$@" 00:35:11.789 20:30:08 -- target/dif.sh@31 -- # create_subsystem 1 00:35:11.789 20:30:08 -- target/dif.sh@18 -- # local sub_id=1 00:35:11.789 20:30:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:11.789 20:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:11.789 20:30:08 -- common/autotest_common.sh@10 -- # set +x 00:35:11.789 bdev_null1 00:35:11.789 20:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:11.789 20:30:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:11.789 20:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:11.789 20:30:08 -- common/autotest_common.sh@10 -- # set +x 00:35:11.789 20:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:11.789 20:30:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:11.789 20:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:11.789 20:30:08 -- common/autotest_common.sh@10 -- # set 
+x 00:35:11.789 20:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:11.789 20:30:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:11.789 20:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:11.789 20:30:08 -- common/autotest_common.sh@10 -- # set +x 00:35:11.789 20:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:11.789 20:30:08 -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:11.789 20:30:08 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:11.789 20:30:08 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:11.789 20:30:08 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:11.789 20:30:08 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:11.789 20:30:08 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:11.789 20:30:08 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:11.789 20:30:08 -- common/autotest_common.sh@1318 -- # local sanitizers 00:35:11.789 20:30:08 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:35:11.789 20:30:08 -- common/autotest_common.sh@1320 -- # shift 00:35:11.789 20:30:08 -- nvmf/common.sh@520 -- # config=() 00:35:11.789 20:30:08 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:35:11.789 20:30:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:11.789 20:30:08 -- nvmf/common.sh@520 -- # local subsystem config 00:35:11.789 20:30:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:11.789 20:30:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:11.789 { 00:35:11.789 "params": { 00:35:11.789 "name": "Nvme$subsystem", 00:35:11.789 "trtype": "$TEST_TRANSPORT", 00:35:11.789 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:11.789 "adrfam": "ipv4", 00:35:11.789 "trsvcid": "$NVMF_PORT", 00:35:11.789 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:11.789 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:11.789 "hdgst": ${hdgst:-false}, 00:35:11.789 "ddgst": ${ddgst:-false} 00:35:11.789 }, 00:35:11.789 "method": "bdev_nvme_attach_controller" 00:35:11.789 } 00:35:11.789 EOF 00:35:11.789 )") 00:35:11.789 20:30:08 -- target/dif.sh@82 -- # gen_fio_conf 00:35:11.789 20:30:08 -- target/dif.sh@54 -- # local file 00:35:11.789 20:30:08 -- target/dif.sh@56 -- # cat 00:35:11.789 20:30:08 -- nvmf/common.sh@542 -- # cat 00:35:11.789 20:30:08 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:35:11.789 20:30:08 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:11.789 20:30:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:11.789 20:30:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:11.789 20:30:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:11.790 { 00:35:11.790 "params": { 00:35:11.790 "name": "Nvme$subsystem", 00:35:11.790 "trtype": "$TEST_TRANSPORT", 00:35:11.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:11.790 "adrfam": "ipv4", 00:35:11.790 "trsvcid": "$NVMF_PORT", 00:35:11.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:11.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:11.790 "hdgst": ${hdgst:-false}, 00:35:11.790 "ddgst": ${ddgst:-false} 00:35:11.790 }, 00:35:11.790 
"method": "bdev_nvme_attach_controller" 00:35:11.790 } 00:35:11.790 EOF 00:35:11.790 )") 00:35:11.790 20:30:08 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:11.790 20:30:08 -- target/dif.sh@72 -- # (( file <= files )) 00:35:11.790 20:30:08 -- target/dif.sh@73 -- # cat 00:35:11.790 20:30:08 -- nvmf/common.sh@542 -- # cat 00:35:11.790 20:30:08 -- target/dif.sh@72 -- # (( file++ )) 00:35:11.790 20:30:08 -- target/dif.sh@72 -- # (( file <= files )) 00:35:11.790 20:30:08 -- nvmf/common.sh@544 -- # jq . 00:35:11.790 20:30:08 -- nvmf/common.sh@545 -- # IFS=, 00:35:11.790 20:30:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:11.790 "params": { 00:35:11.790 "name": "Nvme0", 00:35:11.790 "trtype": "tcp", 00:35:11.790 "traddr": "10.0.0.2", 00:35:11.790 "adrfam": "ipv4", 00:35:11.790 "trsvcid": "4420", 00:35:11.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:11.790 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:11.790 "hdgst": false, 00:35:11.790 "ddgst": false 00:35:11.790 }, 00:35:11.790 "method": "bdev_nvme_attach_controller" 00:35:11.790 },{ 00:35:11.790 "params": { 00:35:11.790 "name": "Nvme1", 00:35:11.790 "trtype": "tcp", 00:35:11.790 "traddr": "10.0.0.2", 00:35:11.790 "adrfam": "ipv4", 00:35:11.790 "trsvcid": "4420", 00:35:11.790 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:11.790 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:11.790 "hdgst": false, 00:35:11.790 "ddgst": false 00:35:11.790 }, 00:35:11.790 "method": "bdev_nvme_attach_controller" 00:35:11.790 }' 00:35:11.790 20:30:08 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:11.790 20:30:08 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:11.790 20:30:08 -- common/autotest_common.sh@1326 -- # break 00:35:11.790 20:30:08 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:11.790 20:30:08 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:11.790 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:11.790 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:11.790 fio-3.35 00:35:11.790 Starting 2 threads 00:35:11.790 EAL: No free 2048 kB hugepages reported on node 1 00:35:11.790 [2024-04-25 20:30:09.512745] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:35:11.790 [2024-04-25 20:30:09.512843] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:21.766 00:35:21.766 filename0: (groupid=0, jobs=1): err= 0: pid=1786473: Thu Apr 25 20:30:19 2024 00:35:21.766 read: IOPS=189, BW=757KiB/s (775kB/s)(7600KiB/10041msec) 00:35:21.766 slat (nsec): min=3600, max=23390, avg=6536.07, stdev=957.99 00:35:21.766 clat (usec): min=686, max=44519, avg=21119.81, stdev=20220.02 00:35:21.766 lat (usec): min=692, max=44542, avg=21126.35, stdev=20219.76 00:35:21.766 clat percentiles (usec): 00:35:21.766 | 1.00th=[ 742], 5.00th=[ 750], 10.00th=[ 758], 20.00th=[ 766], 00:35:21.766 | 30.00th=[ 775], 40.00th=[ 783], 50.00th=[41157], 60.00th=[41157], 00:35:21.766 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:21.766 | 99.00th=[41157], 99.50th=[41681], 99.90th=[44303], 99.95th=[44303], 00:35:21.766 | 99.99th=[44303] 00:35:21.766 bw ( KiB/s): min= 704, max= 768, per=50.13%, avg=758.40, stdev=23.45, samples=20 00:35:21.766 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:35:21.766 lat (usec) : 750=2.79%, 1000=46.47% 00:35:21.766 lat (msec) : 2=0.42%, 50=50.32% 00:35:21.766 cpu : usr=98.63%, sys=1.11%, ctx=13, majf=0, minf=1635 00:35:21.766 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:21.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.766 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:21.766 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:21.766 filename1: (groupid=0, jobs=1): err= 0: pid=1786474: Thu Apr 25 20:30:19 2024 00:35:21.766 read: IOPS=189, BW=758KiB/s (777kB/s)(7584KiB/10001msec) 00:35:21.766 slat (nsec): min=3261, max=18624, avg=6614.61, stdev=1107.08 00:35:21.766 clat (usec): min=737, max=43147, avg=21078.71, stdev=20189.29 00:35:21.766 lat (usec): min=743, max=43166, avg=21085.33, stdev=20188.93 00:35:21.766 clat percentiles (usec): 00:35:21.766 | 1.00th=[ 750], 5.00th=[ 766], 10.00th=[ 775], 20.00th=[ 807], 00:35:21.766 | 30.00th=[ 824], 40.00th=[ 832], 50.00th=[41157], 60.00th=[41157], 00:35:21.766 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:21.766 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:35:21.766 | 99.99th=[43254] 00:35:21.766 bw ( KiB/s): min= 702, max= 768, per=50.06%, avg=757.79, stdev=24.23, samples=19 00:35:21.766 iops : min= 175, max= 192, avg=189.42, stdev= 6.12, samples=19 00:35:21.766 lat (usec) : 750=1.16%, 1000=48.42% 00:35:21.766 lat (msec) : 2=0.21%, 50=50.21% 00:35:21.766 cpu : usr=98.65%, sys=1.08%, ctx=13, majf=0, minf=1633 00:35:21.766 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:21.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.766 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:21.766 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:21.766 00:35:21.766 Run status group 0 (all jobs): 00:35:21.766 READ: bw=1512KiB/s (1548kB/s), 757KiB/s-758KiB/s (775kB/s-777kB/s), io=14.8MiB (15.5MB), run=10001-10041msec 00:35:22.706 ----------------------------------------------------- 00:35:22.706 Suppressions used: 00:35:22.706 count bytes template 00:35:22.706 2 16 /usr/src/fio/parse.c 00:35:22.706 1 8 
libtcmalloc_minimal.so 00:35:22.706 1 904 libcrypto.so 00:35:22.706 ----------------------------------------------------- 00:35:22.706 00:35:22.706 20:30:20 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:22.706 20:30:20 -- target/dif.sh@43 -- # local sub 00:35:22.706 20:30:20 -- target/dif.sh@45 -- # for sub in "$@" 00:35:22.706 20:30:20 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:22.706 20:30:20 -- target/dif.sh@36 -- # local sub_id=0 00:35:22.706 20:30:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:22.706 20:30:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:22.706 20:30:20 -- common/autotest_common.sh@10 -- # set +x 00:35:22.706 20:30:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:22.706 20:30:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:22.706 20:30:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:22.706 20:30:20 -- common/autotest_common.sh@10 -- # set +x 00:35:22.706 20:30:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:22.706 20:30:20 -- target/dif.sh@45 -- # for sub in "$@" 00:35:22.706 20:30:20 -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:22.706 20:30:20 -- target/dif.sh@36 -- # local sub_id=1 00:35:22.706 20:30:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:22.706 20:30:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:22.706 20:30:20 -- common/autotest_common.sh@10 -- # set +x 00:35:22.706 20:30:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:22.706 20:30:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:22.706 20:30:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:22.706 20:30:20 -- common/autotest_common.sh@10 -- # set +x 00:35:22.706 20:30:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:22.706 00:35:22.706 real 0m12.334s 00:35:22.706 user 0m33.643s 00:35:22.706 sys 0m0.783s 00:35:22.706 20:30:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:22.706 20:30:20 -- common/autotest_common.sh@10 -- # set +x 00:35:22.706 ************************************ 00:35:22.706 END TEST fio_dif_1_multi_subsystems 00:35:22.706 ************************************ 00:35:22.706 20:30:20 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:22.706 20:30:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:22.706 20:30:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:22.706 20:30:20 -- common/autotest_common.sh@10 -- # set +x 00:35:22.706 ************************************ 00:35:22.706 START TEST fio_dif_rand_params 00:35:22.706 ************************************ 00:35:22.706 20:30:20 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:35:22.706 20:30:20 -- target/dif.sh@100 -- # local NULL_DIF 00:35:22.706 20:30:20 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:22.706 20:30:20 -- target/dif.sh@103 -- # NULL_DIF=3 00:35:22.706 20:30:20 -- target/dif.sh@103 -- # bs=128k 00:35:22.706 20:30:20 -- target/dif.sh@103 -- # numjobs=3 00:35:22.706 20:30:20 -- target/dif.sh@103 -- # iodepth=3 00:35:22.706 20:30:20 -- target/dif.sh@103 -- # runtime=5 00:35:22.706 20:30:20 -- target/dif.sh@105 -- # create_subsystems 0 00:35:22.706 20:30:20 -- target/dif.sh@28 -- # local sub 00:35:22.706 20:30:20 -- target/dif.sh@30 -- # for sub in "$@" 00:35:22.706 20:30:20 -- target/dif.sh@31 -- # create_subsystem 0 00:35:22.706 20:30:20 -- target/dif.sh@18 -- # local 
sub_id=0 00:35:22.706 20:30:20 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:22.706 20:30:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:22.706 20:30:20 -- common/autotest_common.sh@10 -- # set +x 00:35:22.706 bdev_null0 00:35:22.706 20:30:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:22.706 20:30:20 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:22.706 20:30:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:22.706 20:30:20 -- common/autotest_common.sh@10 -- # set +x 00:35:22.706 20:30:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:22.706 20:30:20 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:22.706 20:30:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:22.706 20:30:20 -- common/autotest_common.sh@10 -- # set +x 00:35:22.707 20:30:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:22.707 20:30:20 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:22.707 20:30:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:22.707 20:30:20 -- common/autotest_common.sh@10 -- # set +x 00:35:22.707 [2024-04-25 20:30:20.592200] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:22.707 20:30:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:22.707 20:30:20 -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:22.707 20:30:20 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.707 20:30:20 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.707 20:30:20 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:22.707 20:30:20 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:22.707 20:30:20 -- common/autotest_common.sh@1318 -- # local sanitizers 00:35:22.707 20:30:20 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.707 20:30:20 -- common/autotest_common.sh@1320 -- # shift 00:35:22.707 20:30:20 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:22.707 20:30:20 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:35:22.707 20:30:20 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:22.707 20:30:20 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:22.707 20:30:20 -- nvmf/common.sh@520 -- # config=() 00:35:22.707 20:30:20 -- nvmf/common.sh@520 -- # local subsystem config 00:35:22.707 20:30:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:22.707 20:30:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:22.707 { 00:35:22.707 "params": { 00:35:22.707 "name": "Nvme$subsystem", 00:35:22.707 "trtype": "$TEST_TRANSPORT", 00:35:22.707 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:22.707 "adrfam": "ipv4", 00:35:22.707 "trsvcid": "$NVMF_PORT", 00:35:22.707 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:22.707 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:22.707 "hdgst": ${hdgst:-false}, 00:35:22.707 "ddgst": ${ddgst:-false} 00:35:22.707 }, 00:35:22.707 "method": "bdev_nvme_attach_controller" 00:35:22.707 } 00:35:22.707 EOF 00:35:22.707 )") 
00:35:22.707 20:30:20 -- target/dif.sh@82 -- # gen_fio_conf 00:35:22.707 20:30:20 -- target/dif.sh@54 -- # local file 00:35:22.707 20:30:20 -- target/dif.sh@56 -- # cat 00:35:22.707 20:30:20 -- nvmf/common.sh@542 -- # cat 00:35:22.707 20:30:20 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.707 20:30:20 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:22.707 20:30:20 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:22.707 20:30:20 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:22.707 20:30:20 -- target/dif.sh@72 -- # (( file <= files )) 00:35:22.707 20:30:20 -- nvmf/common.sh@544 -- # jq . 00:35:22.707 20:30:20 -- nvmf/common.sh@545 -- # IFS=, 00:35:22.707 20:30:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:22.707 "params": { 00:35:22.707 "name": "Nvme0", 00:35:22.707 "trtype": "tcp", 00:35:22.707 "traddr": "10.0.0.2", 00:35:22.707 "adrfam": "ipv4", 00:35:22.707 "trsvcid": "4420", 00:35:22.707 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:22.707 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:22.707 "hdgst": false, 00:35:22.707 "ddgst": false 00:35:22.707 }, 00:35:22.707 "method": "bdev_nvme_attach_controller" 00:35:22.707 }' 00:35:22.707 20:30:20 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:22.707 20:30:20 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:22.707 20:30:20 -- common/autotest_common.sh@1326 -- # break 00:35:22.707 20:30:20 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:22.707 20:30:20 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:23.283 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:23.283 ... 00:35:23.283 fio-3.35 00:35:23.283 Starting 3 threads 00:35:23.283 EAL: No free 2048 kB hugepages reported on node 1 00:35:23.852 [2024-04-25 20:30:21.747009] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
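For the fio_dif_rand_params pass the null bdev is recreated with DIF type 3 (bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 above, i.e. a 64 MB device with 512-byte blocks carrying 16 bytes of metadata), and the job switches to 128 KiB random reads, three jobs at iodepth 3 for 5 seconds. Stripped of the harness plumbing and file-descriptor redirections, the fio invocation is essentially the sketch below; bdev.json and dif.fio are stand-in names for the /dev/fd/62 and /dev/fd/61 streams, and libasan.so.8 is preloaded here only because this build is ASAN-instrumented:

  LD_PRELOAD='/usr/lib64/libasan.so.8 ./build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.fio

  # bdev.json carries the single bdev_nvme_attach_controller block printed above
  # (Nvme0, trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode0)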
00:35:23.852 [2024-04-25 20:30:21.747066] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:29.146 00:35:29.146 filename0: (groupid=0, jobs=1): err= 0: pid=1789011: Thu Apr 25 20:30:26 2024 00:35:29.146 read: IOPS=295, BW=37.0MiB/s (38.8MB/s)(186MiB/5043msec) 00:35:29.146 slat (nsec): min=6043, max=29813, avg=8718.86, stdev=3342.51 00:35:29.146 clat (usec): min=3291, max=90520, avg=10108.20, stdev=11925.52 00:35:29.146 lat (usec): min=3298, max=90529, avg=10116.91, stdev=11925.67 00:35:29.146 clat percentiles (usec): 00:35:29.146 | 1.00th=[ 3752], 5.00th=[ 3982], 10.00th=[ 4228], 20.00th=[ 5145], 00:35:29.146 | 30.00th=[ 5800], 40.00th=[ 6128], 50.00th=[ 6783], 60.00th=[ 7504], 00:35:29.146 | 70.00th=[ 8160], 80.00th=[ 8848], 90.00th=[11207], 95.00th=[46924], 00:35:29.146 | 99.00th=[51119], 99.50th=[56886], 99.90th=[89654], 99.95th=[90702], 00:35:29.146 | 99.99th=[90702] 00:35:29.146 bw ( KiB/s): min=26880, max=47616, per=35.40%, avg=38118.40, stdev=6168.19, samples=10 00:35:29.146 iops : min= 210, max= 372, avg=297.60, stdev=48.22, samples=10 00:35:29.146 lat (msec) : 4=5.03%, 10=80.48%, 20=6.84%, 50=6.04%, 100=1.61% 00:35:29.146 cpu : usr=95.78%, sys=3.91%, ctx=8, majf=0, minf=1635 00:35:29.146 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:29.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.146 issued rwts: total=1491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:29.146 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:29.146 filename0: (groupid=0, jobs=1): err= 0: pid=1789012: Thu Apr 25 20:30:26 2024 00:35:29.146 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(183MiB/5032msec) 00:35:29.146 slat (nsec): min=6018, max=27425, avg=8601.82, stdev=3014.65 00:35:29.146 clat (usec): min=3516, max=90409, avg=10320.42, stdev=11862.29 00:35:29.146 lat (usec): min=3523, max=90417, avg=10329.02, stdev=11862.09 00:35:29.146 clat percentiles (usec): 00:35:29.146 | 1.00th=[ 3851], 5.00th=[ 4359], 10.00th=[ 4686], 20.00th=[ 5538], 00:35:29.146 | 30.00th=[ 5997], 40.00th=[ 6390], 50.00th=[ 6915], 60.00th=[ 7701], 00:35:29.146 | 70.00th=[ 8586], 80.00th=[ 9372], 90.00th=[11338], 95.00th=[47449], 00:35:29.146 | 99.00th=[51643], 99.50th=[54264], 99.90th=[89654], 99.95th=[90702], 00:35:29.146 | 99.99th=[90702] 00:35:29.146 bw ( KiB/s): min=17664, max=53397, per=34.66%, avg=37314.10, stdev=10199.00, samples=10 00:35:29.146 iops : min= 138, max= 417, avg=291.50, stdev=79.65, samples=10 00:35:29.146 lat (msec) : 4=2.40%, 10=81.45%, 20=8.83%, 50=5.54%, 100=1.78% 00:35:29.146 cpu : usr=96.22%, sys=3.46%, ctx=6, majf=0, minf=1634 00:35:29.146 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:29.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.146 issued rwts: total=1461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:29.146 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:29.146 filename0: (groupid=0, jobs=1): err= 0: pid=1789013: Thu Apr 25 20:30:26 2024 00:35:29.146 read: IOPS=255, BW=32.0MiB/s (33.5MB/s)(161MiB/5041msec) 00:35:29.146 slat (nsec): min=5983, max=33232, avg=8191.87, stdev=3212.46 00:35:29.146 clat (usec): min=3464, max=90873, avg=11710.57, stdev=13437.22 00:35:29.146 lat (usec): min=3470, max=90886, avg=11718.76, stdev=13437.52 
00:35:29.146 clat percentiles (usec): 00:35:29.146 | 1.00th=[ 3720], 5.00th=[ 3982], 10.00th=[ 4228], 20.00th=[ 5669], 00:35:29.146 | 30.00th=[ 6128], 40.00th=[ 6587], 50.00th=[ 7439], 60.00th=[ 8455], 00:35:29.146 | 70.00th=[ 9110], 80.00th=[10290], 90.00th=[44827], 95.00th=[48497], 00:35:29.146 | 99.00th=[51119], 99.50th=[53216], 99.90th=[90702], 99.95th=[90702], 00:35:29.146 | 99.99th=[90702] 00:35:29.146 bw ( KiB/s): min=17920, max=48640, per=30.61%, avg=32954.20, stdev=9573.59, samples=10 00:35:29.146 iops : min= 140, max= 380, avg=257.40, stdev=74.78, samples=10 00:35:29.146 lat (msec) : 4=5.27%, 10=72.33%, 20=12.17%, 50=8.45%, 100=1.78% 00:35:29.146 cpu : usr=96.73%, sys=2.94%, ctx=7, majf=0, minf=1637 00:35:29.146 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:29.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.146 issued rwts: total=1290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:29.146 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:29.146 00:35:29.146 Run status group 0 (all jobs): 00:35:29.146 READ: bw=105MiB/s (110MB/s), 32.0MiB/s-37.0MiB/s (33.5MB/s-38.8MB/s), io=530MiB (556MB), run=5032-5043msec 00:35:29.716 ----------------------------------------------------- 00:35:29.716 Suppressions used: 00:35:29.716 count bytes template 00:35:29.716 5 44 /usr/src/fio/parse.c 00:35:29.716 1 8 libtcmalloc_minimal.so 00:35:29.716 1 904 libcrypto.so 00:35:29.716 ----------------------------------------------------- 00:35:29.716 00:35:29.716 20:30:27 -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:29.716 20:30:27 -- target/dif.sh@43 -- # local sub 00:35:29.716 20:30:27 -- target/dif.sh@45 -- # for sub in "$@" 00:35:29.716 20:30:27 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:29.716 20:30:27 -- target/dif.sh@36 -- # local sub_id=0 00:35:29.716 20:30:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:29.716 20:30:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:29.716 20:30:27 -- common/autotest_common.sh@10 -- # set +x 00:35:29.716 20:30:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:29.716 20:30:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:29.716 20:30:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:29.716 20:30:27 -- common/autotest_common.sh@10 -- # set +x 00:35:29.716 20:30:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:29.716 20:30:27 -- target/dif.sh@109 -- # NULL_DIF=2 00:35:29.716 20:30:27 -- target/dif.sh@109 -- # bs=4k 00:35:29.716 20:30:27 -- target/dif.sh@109 -- # numjobs=8 00:35:29.716 20:30:27 -- target/dif.sh@109 -- # iodepth=16 00:35:29.716 20:30:27 -- target/dif.sh@109 -- # runtime= 00:35:29.716 20:30:27 -- target/dif.sh@109 -- # files=2 00:35:29.716 20:30:27 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:29.716 20:30:27 -- target/dif.sh@28 -- # local sub 00:35:29.716 20:30:27 -- target/dif.sh@30 -- # for sub in "$@" 00:35:29.716 20:30:27 -- target/dif.sh@31 -- # create_subsystem 0 00:35:29.716 20:30:27 -- target/dif.sh@18 -- # local sub_id=0 00:35:29.716 20:30:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:29.716 20:30:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:29.716 20:30:27 -- common/autotest_common.sh@10 -- # set +x 00:35:29.716 bdev_null0 00:35:29.716 20:30:27 -- common/autotest_common.sh@579 
-- # [[ 0 == 0 ]] 00:35:29.716 20:30:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:29.716 20:30:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:29.716 20:30:27 -- common/autotest_common.sh@10 -- # set +x 00:35:29.716 20:30:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:29.716 20:30:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:29.716 20:30:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:29.716 20:30:27 -- common/autotest_common.sh@10 -- # set +x 00:35:29.716 20:30:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:29.716 20:30:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:29.716 20:30:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:29.716 20:30:27 -- common/autotest_common.sh@10 -- # set +x 00:35:29.716 [2024-04-25 20:30:27.601833] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:29.716 20:30:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:29.716 20:30:27 -- target/dif.sh@30 -- # for sub in "$@" 00:35:29.716 20:30:27 -- target/dif.sh@31 -- # create_subsystem 1 00:35:29.716 20:30:27 -- target/dif.sh@18 -- # local sub_id=1 00:35:29.716 20:30:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:29.716 20:30:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:29.717 20:30:27 -- common/autotest_common.sh@10 -- # set +x 00:35:29.717 bdev_null1 00:35:29.717 20:30:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:29.717 20:30:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:29.717 20:30:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:29.717 20:30:27 -- common/autotest_common.sh@10 -- # set +x 00:35:29.717 20:30:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:29.717 20:30:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:29.717 20:30:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:29.717 20:30:27 -- common/autotest_common.sh@10 -- # set +x 00:35:29.717 20:30:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:29.717 20:30:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:29.717 20:30:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:29.717 20:30:27 -- common/autotest_common.sh@10 -- # set +x 00:35:29.717 20:30:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:29.717 20:30:27 -- target/dif.sh@30 -- # for sub in "$@" 00:35:29.717 20:30:27 -- target/dif.sh@31 -- # create_subsystem 2 00:35:29.717 20:30:27 -- target/dif.sh@18 -- # local sub_id=2 00:35:29.717 20:30:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:29.717 20:30:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:29.717 20:30:27 -- common/autotest_common.sh@10 -- # set +x 00:35:29.717 bdev_null2 00:35:29.717 20:30:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:29.717 20:30:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:29.717 20:30:27 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:35:29.717 20:30:27 -- common/autotest_common.sh@10 -- # set +x 00:35:29.978 20:30:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:29.978 20:30:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:29.978 20:30:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:29.978 20:30:27 -- common/autotest_common.sh@10 -- # set +x 00:35:29.978 20:30:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:29.978 20:30:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:29.978 20:30:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:29.978 20:30:27 -- common/autotest_common.sh@10 -- # set +x 00:35:29.978 20:30:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:29.978 20:30:27 -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:29.978 20:30:27 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.978 20:30:27 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.978 20:30:27 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:29.978 20:30:27 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:29.978 20:30:27 -- common/autotest_common.sh@1318 -- # local sanitizers 00:35:29.978 20:30:27 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:29.978 20:30:27 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.978 20:30:27 -- common/autotest_common.sh@1320 -- # shift 00:35:29.978 20:30:27 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:35:29.978 20:30:27 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:29.978 20:30:27 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:29.978 20:30:27 -- nvmf/common.sh@520 -- # config=() 00:35:29.978 20:30:27 -- nvmf/common.sh@520 -- # local subsystem config 00:35:29.978 20:30:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:29.978 20:30:27 -- target/dif.sh@82 -- # gen_fio_conf 00:35:29.978 20:30:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:29.978 { 00:35:29.978 "params": { 00:35:29.978 "name": "Nvme$subsystem", 00:35:29.978 "trtype": "$TEST_TRANSPORT", 00:35:29.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:29.978 "adrfam": "ipv4", 00:35:29.978 "trsvcid": "$NVMF_PORT", 00:35:29.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:29.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:29.978 "hdgst": ${hdgst:-false}, 00:35:29.978 "ddgst": ${ddgst:-false} 00:35:29.978 }, 00:35:29.978 "method": "bdev_nvme_attach_controller" 00:35:29.978 } 00:35:29.978 EOF 00:35:29.978 )") 00:35:29.978 20:30:27 -- target/dif.sh@54 -- # local file 00:35:29.978 20:30:27 -- target/dif.sh@56 -- # cat 00:35:29.978 20:30:27 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:29.978 20:30:27 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:29.978 20:30:27 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.978 20:30:27 -- nvmf/common.sh@542 -- # cat 00:35:29.978 20:30:27 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:29.978 20:30:27 -- target/dif.sh@72 -- # (( file <= files )) 00:35:29.978 20:30:27 -- target/dif.sh@73 -- # cat 00:35:29.978 20:30:27 -- target/dif.sh@72 
-- # (( file++ )) 00:35:29.978 20:30:27 -- target/dif.sh@72 -- # (( file <= files )) 00:35:29.978 20:30:27 -- target/dif.sh@73 -- # cat 00:35:29.978 20:30:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:29.978 20:30:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:29.978 { 00:35:29.978 "params": { 00:35:29.978 "name": "Nvme$subsystem", 00:35:29.978 "trtype": "$TEST_TRANSPORT", 00:35:29.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:29.978 "adrfam": "ipv4", 00:35:29.978 "trsvcid": "$NVMF_PORT", 00:35:29.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:29.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:29.978 "hdgst": ${hdgst:-false}, 00:35:29.978 "ddgst": ${ddgst:-false} 00:35:29.978 }, 00:35:29.978 "method": "bdev_nvme_attach_controller" 00:35:29.978 } 00:35:29.978 EOF 00:35:29.978 )") 00:35:29.978 20:30:27 -- nvmf/common.sh@542 -- # cat 00:35:29.978 20:30:27 -- target/dif.sh@72 -- # (( file++ )) 00:35:29.978 20:30:27 -- target/dif.sh@72 -- # (( file <= files )) 00:35:29.978 20:30:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:29.978 20:30:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:29.978 { 00:35:29.978 "params": { 00:35:29.978 "name": "Nvme$subsystem", 00:35:29.978 "trtype": "$TEST_TRANSPORT", 00:35:29.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:29.978 "adrfam": "ipv4", 00:35:29.978 "trsvcid": "$NVMF_PORT", 00:35:29.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:29.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:29.978 "hdgst": ${hdgst:-false}, 00:35:29.978 "ddgst": ${ddgst:-false} 00:35:29.978 }, 00:35:29.978 "method": "bdev_nvme_attach_controller" 00:35:29.978 } 00:35:29.978 EOF 00:35:29.978 )") 00:35:29.978 20:30:27 -- nvmf/common.sh@542 -- # cat 00:35:29.978 20:30:27 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:29.978 20:30:27 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:29.978 20:30:27 -- common/autotest_common.sh@1326 -- # break 00:35:29.978 20:30:27 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:29.978 20:30:27 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.978 20:30:27 -- nvmf/common.sh@544 -- # jq . 
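For reference, the trace above is target/dif.sh assembling one bdev_nvme_attach_controller entry per subsystem (gen_nvmf_target_json) and handing the result to the fio spdk_bdev plugin on /dev/fd/62. A minimal standalone sketch of the same setup and invocation follows; it assumes a running nvmf_tgt with the TCP transport already created, and the rpc.py path, config file name, and job file name are illustrative rather than taken from this run (the harness itself goes through rpc_cmd and file descriptors instead).

# Create three DIF-type-2 null bdevs and expose each behind its own NVMe/TCP
# subsystem (mirrors the "create_subsystems 0 1 2" sequence traced above).
for i in 0 1 2; do
    scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 2
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        --serial-number 53313233-$i --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
done

# Run fio through the SPDK bdev plugin; bdev.json would hold the
# bdev_nvme_attach_controller entries that the harness prints just below.
LD_PRELOAD=./spdk/build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio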
00:35:29.978 20:30:27 -- nvmf/common.sh@545 -- # IFS=, 00:35:29.978 20:30:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:29.978 "params": { 00:35:29.978 "name": "Nvme0", 00:35:29.978 "trtype": "tcp", 00:35:29.978 "traddr": "10.0.0.2", 00:35:29.978 "adrfam": "ipv4", 00:35:29.978 "trsvcid": "4420", 00:35:29.978 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:29.978 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:29.978 "hdgst": false, 00:35:29.978 "ddgst": false 00:35:29.978 }, 00:35:29.978 "method": "bdev_nvme_attach_controller" 00:35:29.978 },{ 00:35:29.978 "params": { 00:35:29.978 "name": "Nvme1", 00:35:29.978 "trtype": "tcp", 00:35:29.978 "traddr": "10.0.0.2", 00:35:29.978 "adrfam": "ipv4", 00:35:29.978 "trsvcid": "4420", 00:35:29.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:29.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:29.978 "hdgst": false, 00:35:29.978 "ddgst": false 00:35:29.978 }, 00:35:29.978 "method": "bdev_nvme_attach_controller" 00:35:29.978 },{ 00:35:29.978 "params": { 00:35:29.978 "name": "Nvme2", 00:35:29.978 "trtype": "tcp", 00:35:29.978 "traddr": "10.0.0.2", 00:35:29.978 "adrfam": "ipv4", 00:35:29.978 "trsvcid": "4420", 00:35:29.978 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:29.978 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:29.978 "hdgst": false, 00:35:29.978 "ddgst": false 00:35:29.978 }, 00:35:29.978 "method": "bdev_nvme_attach_controller" 00:35:29.978 }' 00:35:30.237 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:30.237 ... 00:35:30.237 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:30.237 ... 00:35:30.237 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:30.237 ... 00:35:30.237 fio-3.35 00:35:30.237 Starting 24 threads 00:35:30.237 EAL: No free 2048 kB hugepages reported on node 1 00:35:31.173 [2024-04-25 20:30:29.002642] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
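The filename0/filename1/filename2 sections fio reports here come from gen_fio_conf with the parameters set at target/dif.sh@109 (NULL_DIF=2, bs=4k, numjobs=8, iodepth=16, files=2, no runtime cap), which is why 24 threads start: 3 files times 8 jobs. A sketch of what that job file plausibly looks like is below; the filename= values name the NvmeXn1 bdevs the plugin creates from the JSON above and are an assumption, since the generated file itself is never echoed into the log.

[global]
thread=1
rw=randread
bs=4k
numjobs=8
iodepth=16

; bdev names below are assumed: one NvmeXn1 bdev per attach_controller entry
[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1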
00:35:31.173 [2024-04-25 20:30:29.002708] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:43.371 00:35:43.372 filename0: (groupid=0, jobs=1): err= 0: pid=1790645: Thu Apr 25 20:30:39 2024 00:35:43.372 read: IOPS=530, BW=2121KiB/s (2171kB/s)(20.8MiB/10020msec) 00:35:43.372 slat (usec): min=3, max=461, avg=35.35, stdev=29.14 00:35:43.372 clat (usec): min=6099, max=33929, avg=29837.80, stdev=2064.66 00:35:43.372 lat (usec): min=6106, max=33940, avg=29873.15, stdev=2066.81 00:35:43.372 clat percentiles (usec): 00:35:43.372 | 1.00th=[24511], 5.00th=[28967], 10.00th=[29230], 20.00th=[29754], 00:35:43.372 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:43.372 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:43.372 | 99.00th=[31327], 99.50th=[31851], 99.90th=[33817], 99.95th=[33817], 00:35:43.372 | 99.99th=[33817] 00:35:43.372 bw ( KiB/s): min= 2048, max= 2308, per=4.21%, avg=2118.60, stdev=77.92, samples=20 00:35:43.372 iops : min= 512, max= 577, avg=529.65, stdev=19.48, samples=20 00:35:43.372 lat (msec) : 10=0.43%, 20=0.47%, 50=99.10% 00:35:43.372 cpu : usr=97.07%, sys=1.52%, ctx=105, majf=0, minf=1634 00:35:43.372 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:43.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.372 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.372 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.372 filename0: (groupid=0, jobs=1): err= 0: pid=1790646: Thu Apr 25 20:30:39 2024 00:35:43.372 read: IOPS=525, BW=2102KiB/s (2153kB/s)(20.6MiB/10016msec) 00:35:43.372 slat (usec): min=6, max=170, avg=37.55, stdev=28.38 00:35:43.372 clat (usec): min=21513, max=42896, avg=30079.60, stdev=1172.41 00:35:43.372 lat (usec): min=21527, max=42935, avg=30117.15, stdev=1170.36 00:35:43.372 clat percentiles (usec): 00:35:43.372 | 1.00th=[28181], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:35:43.372 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:43.372 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:43.372 | 99.00th=[32113], 99.50th=[36963], 99.90th=[42730], 99.95th=[42730], 00:35:43.372 | 99.99th=[42730] 00:35:43.372 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2099.20, stdev=62.85, samples=20 00:35:43.372 iops : min= 512, max= 544, avg=524.80, stdev=15.71, samples=20 00:35:43.372 lat (msec) : 50=100.00% 00:35:43.372 cpu : usr=97.17%, sys=1.37%, ctx=77, majf=0, minf=1633 00:35:43.372 IO depths : 1=5.3%, 2=11.5%, 4=24.9%, 8=51.1%, 16=7.2%, 32=0.0%, >=64=0.0% 00:35:43.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.372 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.372 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.372 filename0: (groupid=0, jobs=1): err= 0: pid=1790647: Thu Apr 25 20:30:39 2024 00:35:43.372 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10010msec) 00:35:43.372 slat (usec): min=5, max=514, avg=40.09, stdev=30.51 00:35:43.372 clat (usec): min=17502, max=51889, avg=30156.74, stdev=2121.26 00:35:43.372 lat (usec): min=17514, max=51916, avg=30196.83, stdev=2119.64 00:35:43.372 clat percentiles (usec): 00:35:43.372 | 1.00th=[24249], 5.00th=[28967], 
10.00th=[29230], 20.00th=[29492], 00:35:43.372 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:43.372 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:43.372 | 99.00th=[36963], 99.50th=[44303], 99.90th=[51643], 99.95th=[51643], 00:35:43.372 | 99.99th=[51643] 00:35:43.372 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2092.80, stdev=75.15, samples=20 00:35:43.372 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:35:43.372 lat (msec) : 20=0.38%, 50=99.31%, 100=0.30% 00:35:43.372 cpu : usr=99.13%, sys=0.50%, ctx=17, majf=0, minf=1632 00:35:43.372 IO depths : 1=5.1%, 2=11.2%, 4=24.3%, 8=52.0%, 16=7.4%, 32=0.0%, >=64=0.0% 00:35:43.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.372 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.372 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.372 filename0: (groupid=0, jobs=1): err= 0: pid=1790648: Thu Apr 25 20:30:39 2024 00:35:43.372 read: IOPS=530, BW=2120KiB/s (2171kB/s)(20.8MiB/10022msec) 00:35:43.372 slat (usec): min=3, max=508, avg=22.84, stdev=20.81 00:35:43.372 clat (usec): min=5280, max=42790, avg=30005.11, stdev=2056.94 00:35:43.372 lat (usec): min=5288, max=42798, avg=30027.95, stdev=2057.21 00:35:43.372 clat percentiles (usec): 00:35:43.372 | 1.00th=[23200], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:35:43.372 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:43.372 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:43.372 | 99.00th=[31589], 99.50th=[31851], 99.90th=[42206], 99.95th=[42206], 00:35:43.372 | 99.99th=[42730] 00:35:43.372 bw ( KiB/s): min= 2048, max= 2304, per=4.21%, avg=2118.40, stdev=77.42, samples=20 00:35:43.372 iops : min= 512, max= 576, avg=529.60, stdev=19.35, samples=20 00:35:43.372 lat (msec) : 10=0.60%, 20=0.11%, 50=99.28% 00:35:43.372 cpu : usr=99.08%, sys=0.47%, ctx=89, majf=0, minf=1637 00:35:43.372 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:43.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.372 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.372 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.372 filename0: (groupid=0, jobs=1): err= 0: pid=1790649: Thu Apr 25 20:30:39 2024 00:35:43.372 read: IOPS=533, BW=2133KiB/s (2184kB/s)(20.9MiB/10024msec) 00:35:43.372 slat (usec): min=4, max=185, avg=33.59, stdev=29.22 00:35:43.372 clat (usec): min=5234, max=44620, avg=29762.28, stdev=2597.18 00:35:43.372 lat (usec): min=5245, max=44663, avg=29795.87, stdev=2599.58 00:35:43.372 clat percentiles (usec): 00:35:43.372 | 1.00th=[16581], 5.00th=[28181], 10.00th=[29230], 20.00th=[29754], 00:35:43.372 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:43.372 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:43.372 | 99.00th=[32113], 99.50th=[34866], 99.90th=[40109], 99.95th=[40633], 00:35:43.372 | 99.99th=[44827] 00:35:43.372 bw ( KiB/s): min= 2048, max= 2464, per=4.23%, avg=2132.00, stdev=98.61, samples=20 00:35:43.372 iops : min= 512, max= 616, avg=533.00, stdev=24.65, samples=20 00:35:43.372 lat (msec) : 10=0.43%, 20=1.53%, 50=98.04% 00:35:43.372 cpu : usr=98.93%, sys=0.62%, ctx=42, majf=0, 
minf=1635 00:35:43.372 IO depths : 1=2.0%, 2=7.9%, 4=23.9%, 8=55.6%, 16=10.5%, 32=0.0%, >=64=0.0% 00:35:43.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.372 complete : 0=0.0%, 4=94.1%, 8=0.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.372 issued rwts: total=5346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.372 filename0: (groupid=0, jobs=1): err= 0: pid=1790650: Thu Apr 25 20:30:39 2024 00:35:43.372 read: IOPS=524, BW=2098KiB/s (2148kB/s)(20.5MiB/10007msec) 00:35:43.372 slat (usec): min=5, max=185, avg=49.14, stdev=31.87 00:35:43.372 clat (usec): min=21524, max=62904, avg=30000.99, stdev=2016.60 00:35:43.372 lat (usec): min=21542, max=62929, avg=30050.13, stdev=2015.08 00:35:43.372 clat percentiles (usec): 00:35:43.372 | 1.00th=[28443], 5.00th=[28967], 10.00th=[29230], 20.00th=[29492], 00:35:43.372 | 30.00th=[29492], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:35:43.372 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:35:43.372 | 99.00th=[31589], 99.50th=[37487], 99.90th=[62653], 99.95th=[62653], 00:35:43.372 | 99.99th=[62653] 00:35:43.372 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2095.16, stdev=76.45, samples=19 00:35:43.373 iops : min= 480, max= 544, avg=523.79, stdev=19.11, samples=19 00:35:43.373 lat (msec) : 50=99.70%, 100=0.30% 00:35:43.373 cpu : usr=99.18%, sys=0.42%, ctx=25, majf=0, minf=1634 00:35:43.373 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:43.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.373 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.373 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.373 filename0: (groupid=0, jobs=1): err= 0: pid=1790651: Thu Apr 25 20:30:39 2024 00:35:43.373 read: IOPS=519, BW=2079KiB/s (2129kB/s)(20.4MiB/10047msec) 00:35:43.373 slat (usec): min=5, max=174, avg=34.84, stdev=30.06 00:35:43.373 clat (usec): min=22063, max=64930, avg=30439.72, stdev=2954.95 00:35:43.373 lat (usec): min=22075, max=64946, avg=30474.56, stdev=2953.23 00:35:43.373 clat percentiles (usec): 00:35:43.373 | 1.00th=[24773], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:35:43.373 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:43.373 | 70.00th=[30278], 80.00th=[30540], 90.00th=[31065], 95.00th=[31589], 00:35:43.373 | 99.00th=[42730], 99.50th=[49546], 99.90th=[64226], 99.95th=[64750], 00:35:43.373 | 99.99th=[64750] 00:35:43.373 bw ( KiB/s): min= 1904, max= 2176, per=4.14%, avg=2084.80, stdev=71.76, samples=20 00:35:43.373 iops : min= 476, max= 544, avg=521.20, stdev=17.94, samples=20 00:35:43.373 lat (msec) : 50=99.50%, 100=0.50% 00:35:43.373 cpu : usr=99.13%, sys=0.49%, ctx=13, majf=0, minf=1635 00:35:43.373 IO depths : 1=1.4%, 2=6.9%, 4=23.0%, 8=57.4%, 16=11.2%, 32=0.0%, >=64=0.0% 00:35:43.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.373 complete : 0=0.0%, 4=93.9%, 8=0.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.373 issued rwts: total=5222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.373 filename0: (groupid=0, jobs=1): err= 0: pid=1790652: Thu Apr 25 20:30:39 2024 00:35:43.373 read: IOPS=528, BW=2114KiB/s (2165kB/s)(20.7MiB/10020msec) 00:35:43.373 slat (usec): min=3, 
max=167, avg=48.96, stdev=32.60 00:35:43.373 clat (usec): min=5373, max=49295, avg=29839.04, stdev=2361.25 00:35:43.373 lat (usec): min=5384, max=49309, avg=29887.99, stdev=2362.79 00:35:43.373 clat percentiles (usec): 00:35:43.373 | 1.00th=[20055], 5.00th=[28967], 10.00th=[29230], 20.00th=[29492], 00:35:43.373 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:35:43.373 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:43.373 | 99.00th=[35914], 99.50th=[40109], 99.90th=[40633], 99.95th=[43254], 00:35:43.373 | 99.99th=[49546] 00:35:43.373 bw ( KiB/s): min= 2048, max= 2308, per=4.20%, avg=2112.20, stdev=78.22, samples=20 00:35:43.373 iops : min= 512, max= 577, avg=528.05, stdev=19.55, samples=20 00:35:43.373 lat (msec) : 10=0.60%, 20=0.30%, 50=99.09% 00:35:43.373 cpu : usr=97.37%, sys=1.22%, ctx=73, majf=0, minf=1634 00:35:43.373 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:43.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.373 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.373 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.373 filename1: (groupid=0, jobs=1): err= 0: pid=1790653: Thu Apr 25 20:30:39 2024 00:35:43.373 read: IOPS=524, BW=2098KiB/s (2148kB/s)(20.5MiB/10007msec) 00:35:43.373 slat (usec): min=4, max=188, avg=53.03, stdev=32.02 00:35:43.373 clat (usec): min=21551, max=64481, avg=29977.30, stdev=2113.05 00:35:43.373 lat (usec): min=21584, max=64503, avg=30030.32, stdev=2111.86 00:35:43.373 clat percentiles (usec): 00:35:43.373 | 1.00th=[28443], 5.00th=[28967], 10.00th=[29230], 20.00th=[29492], 00:35:43.373 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:35:43.373 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:35:43.373 | 99.00th=[31589], 99.50th=[36963], 99.90th=[64226], 99.95th=[64226], 00:35:43.373 | 99.99th=[64226] 00:35:43.373 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2092.80, stdev=75.15, samples=20 00:35:43.373 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:35:43.373 lat (msec) : 50=99.70%, 100=0.30% 00:35:43.373 cpu : usr=97.31%, sys=1.32%, ctx=85, majf=0, minf=1631 00:35:43.373 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:43.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.373 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.373 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.373 filename1: (groupid=0, jobs=1): err= 0: pid=1790654: Thu Apr 25 20:30:39 2024 00:35:43.373 read: IOPS=524, BW=2098KiB/s (2148kB/s)(20.5MiB/10006msec) 00:35:43.373 slat (usec): min=3, max=169, avg=54.48, stdev=31.82 00:35:43.373 clat (usec): min=21533, max=69872, avg=29948.91, stdev=2311.93 00:35:43.373 lat (usec): min=21542, max=69893, avg=30003.40, stdev=2311.13 00:35:43.373 clat percentiles (usec): 00:35:43.373 | 1.00th=[28443], 5.00th=[28967], 10.00th=[29230], 20.00th=[29492], 00:35:43.373 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:35:43.373 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:35:43.373 | 99.00th=[31327], 99.50th=[31589], 99.90th=[69731], 99.95th=[69731], 00:35:43.373 | 99.99th=[69731] 00:35:43.373 bw ( KiB/s): 
min= 1923, max= 2176, per=4.16%, avg=2092.95, stdev=74.79, samples=20 00:35:43.373 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:35:43.373 lat (msec) : 50=99.70%, 100=0.30% 00:35:43.373 cpu : usr=97.24%, sys=1.31%, ctx=109, majf=0, minf=1634 00:35:43.373 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:43.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.373 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.373 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.373 filename1: (groupid=0, jobs=1): err= 0: pid=1790655: Thu Apr 25 20:30:39 2024 00:35:43.373 read: IOPS=525, BW=2102KiB/s (2153kB/s)(20.6MiB/10015msec) 00:35:43.373 slat (usec): min=5, max=165, avg=54.38, stdev=32.20 00:35:43.373 clat (usec): min=17893, max=64196, avg=29924.23, stdev=1789.57 00:35:43.373 lat (usec): min=17903, max=64224, avg=29978.61, stdev=1789.81 00:35:43.373 clat percentiles (usec): 00:35:43.373 | 1.00th=[22414], 5.00th=[28967], 10.00th=[29230], 20.00th=[29492], 00:35:43.373 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:35:43.373 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[31065], 00:35:43.373 | 99.00th=[36963], 99.50th=[39060], 99.90th=[47973], 99.95th=[47973], 00:35:43.373 | 99.99th=[64226] 00:35:43.373 bw ( KiB/s): min= 1923, max= 2176, per=4.17%, avg=2099.35, stdev=73.69, samples=20 00:35:43.373 iops : min= 480, max= 544, avg=524.80, stdev=18.52, samples=20 00:35:43.373 lat (msec) : 20=0.15%, 50=99.81%, 100=0.04% 00:35:43.373 cpu : usr=98.89%, sys=0.62%, ctx=124, majf=0, minf=1634 00:35:43.373 IO depths : 1=4.9%, 2=10.7%, 4=24.4%, 8=52.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:35:43.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.373 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.373 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.373 filename1: (groupid=0, jobs=1): err= 0: pid=1790656: Thu Apr 25 20:30:39 2024 00:35:43.373 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10005msec) 00:35:43.373 slat (usec): min=5, max=177, avg=45.70, stdev=33.15 00:35:43.373 clat (usec): min=12199, max=68758, avg=30138.44, stdev=3011.10 00:35:43.373 lat (usec): min=12213, max=68784, avg=30184.14, stdev=3010.05 00:35:43.373 clat percentiles (usec): 00:35:43.373 | 1.00th=[21365], 5.00th=[28967], 10.00th=[29230], 20.00th=[29492], 00:35:43.373 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:35:43.373 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:43.373 | 99.00th=[41157], 99.50th=[45351], 99.90th=[68682], 99.95th=[68682], 00:35:43.373 | 99.99th=[68682] 00:35:43.373 bw ( KiB/s): min= 1907, max= 2224, per=4.16%, avg=2093.63, stdev=69.35, samples=19 00:35:43.373 iops : min= 476, max= 556, avg=523.37, stdev=17.45, samples=19 00:35:43.373 lat (msec) : 20=0.71%, 50=98.99%, 100=0.31% 00:35:43.373 cpu : usr=99.07%, sys=0.52%, ctx=48, majf=0, minf=1634 00:35:43.374 IO depths : 1=0.8%, 2=6.2%, 4=22.1%, 8=58.7%, 16=12.2%, 32=0.0%, >=64=0.0% 00:35:43.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.374 complete : 0=0.0%, 4=93.8%, 8=1.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.374 issued rwts: total=5244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:35:43.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.374 filename1: (groupid=0, jobs=1): err= 0: pid=1790657: Thu Apr 25 20:30:39 2024 00:35:43.374 read: IOPS=525, BW=2103KiB/s (2153kB/s)(20.6MiB/10014msec) 00:35:43.374 slat (usec): min=6, max=134, avg=44.20, stdev=23.91 00:35:43.374 clat (usec): min=21511, max=47776, avg=30079.70, stdev=1171.02 00:35:43.374 lat (usec): min=21572, max=47841, avg=30123.90, stdev=1168.73 00:35:43.374 clat percentiles (usec): 00:35:43.374 | 1.00th=[28705], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:35:43.374 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:35:43.374 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:35:43.374 | 99.00th=[31327], 99.50th=[31589], 99.90th=[47449], 99.95th=[47973], 00:35:43.374 | 99.99th=[47973] 00:35:43.374 bw ( KiB/s): min= 1923, max= 2176, per=4.17%, avg=2099.35, stdev=76.21, samples=20 00:35:43.374 iops : min= 480, max= 544, avg=524.80, stdev=19.14, samples=20 00:35:43.374 lat (msec) : 50=100.00% 00:35:43.374 cpu : usr=99.00%, sys=0.56%, ctx=18, majf=0, minf=1634 00:35:43.374 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:43.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.374 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.374 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.374 filename1: (groupid=0, jobs=1): err= 0: pid=1790658: Thu Apr 25 20:30:39 2024 00:35:43.374 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10010msec) 00:35:43.374 slat (usec): min=5, max=126, avg=35.44, stdev=23.73 00:35:43.374 clat (usec): min=21575, max=71064, avg=30253.15, stdev=2344.65 00:35:43.374 lat (usec): min=21592, max=71094, avg=30288.60, stdev=2342.38 00:35:43.374 clat percentiles (usec): 00:35:43.374 | 1.00th=[28705], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:35:43.374 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:43.374 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:43.374 | 99.00th=[31851], 99.50th=[32113], 99.90th=[70779], 99.95th=[70779], 00:35:43.374 | 99.99th=[70779] 00:35:43.374 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2092.80, stdev=75.15, samples=20 00:35:43.374 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:35:43.374 lat (msec) : 50=99.70%, 100=0.30% 00:35:43.374 cpu : usr=99.04%, sys=0.53%, ctx=15, majf=0, minf=1638 00:35:43.374 IO depths : 1=5.7%, 2=11.9%, 4=24.8%, 8=50.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:43.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.374 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.374 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.374 filename1: (groupid=0, jobs=1): err= 0: pid=1790659: Thu Apr 25 20:30:39 2024 00:35:43.374 read: IOPS=530, BW=2120KiB/s (2171kB/s)(20.8MiB/10021msec) 00:35:43.374 slat (usec): min=3, max=469, avg=22.10, stdev=18.83 00:35:43.374 clat (usec): min=5496, max=41799, avg=30012.47, stdev=2171.32 00:35:43.374 lat (usec): min=5508, max=41820, avg=30034.57, stdev=2172.22 00:35:43.374 clat percentiles (usec): 00:35:43.374 | 1.00th=[19006], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:35:43.374 | 30.00th=[30016], 40.00th=[30016], 
50.00th=[30278], 60.00th=[30278], 00:35:43.374 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:43.374 | 99.00th=[31851], 99.50th=[32637], 99.90th=[40109], 99.95th=[40109], 00:35:43.374 | 99.99th=[41681] 00:35:43.374 bw ( KiB/s): min= 2048, max= 2304, per=4.21%, avg=2118.40, stdev=77.42, samples=20 00:35:43.374 iops : min= 512, max= 576, avg=529.60, stdev=19.35, samples=20 00:35:43.374 lat (msec) : 10=0.60%, 20=0.45%, 50=98.95% 00:35:43.374 cpu : usr=99.09%, sys=0.49%, ctx=17, majf=0, minf=1636 00:35:43.374 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:43.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.374 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.374 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.374 filename1: (groupid=0, jobs=1): err= 0: pid=1790660: Thu Apr 25 20:30:39 2024 00:35:43.374 read: IOPS=527, BW=2108KiB/s (2159kB/s)(20.6MiB/10018msec) 00:35:43.374 slat (usec): min=5, max=139, avg=26.41, stdev=23.03 00:35:43.374 clat (usec): min=11547, max=48663, avg=30165.32, stdev=1649.52 00:35:43.374 lat (usec): min=11555, max=48692, avg=30191.74, stdev=1648.18 00:35:43.374 clat percentiles (usec): 00:35:43.374 | 1.00th=[27919], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:35:43.374 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:43.374 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:43.374 | 99.00th=[31589], 99.50th=[32113], 99.90th=[48497], 99.95th=[48497], 00:35:43.374 | 99.99th=[48497] 00:35:43.374 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2105.60, stdev=77.42, samples=20 00:35:43.374 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:35:43.374 lat (msec) : 20=0.30%, 50=99.70% 00:35:43.374 cpu : usr=99.13%, sys=0.44%, ctx=16, majf=0, minf=1636 00:35:43.374 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:43.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.374 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.374 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.374 filename2: (groupid=0, jobs=1): err= 0: pid=1790661: Thu Apr 25 20:30:39 2024 00:35:43.374 read: IOPS=529, BW=2119KiB/s (2170kB/s)(20.7MiB/10020msec) 00:35:43.374 slat (usec): min=3, max=110, avg=18.85, stdev=16.94 00:35:43.374 clat (usec): min=14572, max=70665, avg=30066.58, stdev=3114.90 00:35:43.374 lat (usec): min=14581, max=70685, avg=30085.42, stdev=3115.31 00:35:43.374 clat percentiles (usec): 00:35:43.374 | 1.00th=[19006], 5.00th=[24249], 10.00th=[29230], 20.00th=[29754], 00:35:43.374 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:43.374 | 70.00th=[30540], 80.00th=[30540], 90.00th=[31065], 95.00th=[31589], 00:35:43.374 | 99.00th=[37487], 99.50th=[40633], 99.90th=[64226], 99.95th=[70779], 00:35:43.374 | 99.99th=[70779] 00:35:43.374 bw ( KiB/s): min= 1920, max= 2432, per=4.20%, avg=2116.80, stdev=105.00, samples=20 00:35:43.374 iops : min= 480, max= 608, avg=529.20, stdev=26.25, samples=20 00:35:43.374 lat (msec) : 20=1.07%, 50=98.62%, 100=0.30% 00:35:43.374 cpu : usr=99.02%, sys=0.55%, ctx=17, majf=0, minf=1635 00:35:43.374 IO depths : 1=2.0%, 2=7.6%, 4=22.8%, 8=57.0%, 16=10.5%, 32=0.0%, 
>=64=0.0% 00:35:43.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.374 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.374 issued rwts: total=5308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.374 filename2: (groupid=0, jobs=1): err= 0: pid=1790662: Thu Apr 25 20:30:39 2024 00:35:43.374 read: IOPS=525, BW=2101KiB/s (2151kB/s)(20.5MiB/10007msec) 00:35:43.374 slat (usec): min=5, max=469, avg=29.50, stdev=22.68 00:35:43.374 clat (usec): min=16457, max=65772, avg=30240.16, stdev=2754.84 00:35:43.374 lat (usec): min=16464, max=65800, avg=30269.66, stdev=2754.95 00:35:43.374 clat percentiles (usec): 00:35:43.374 | 1.00th=[22152], 5.00th=[28967], 10.00th=[29492], 20.00th=[29754], 00:35:43.374 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:43.374 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31327], 00:35:43.375 | 99.00th=[43254], 99.50th=[48497], 99.90th=[53740], 99.95th=[65799], 00:35:43.375 | 99.99th=[65799] 00:35:43.375 bw ( KiB/s): min= 1939, max= 2176, per=4.16%, avg=2096.15, stdev=70.82, samples=20 00:35:43.375 iops : min= 484, max= 544, avg=524.00, stdev=17.79, samples=20 00:35:43.375 lat (msec) : 20=0.91%, 50=98.78%, 100=0.30% 00:35:43.375 cpu : usr=99.02%, sys=0.56%, ctx=16, majf=0, minf=1636 00:35:43.375 IO depths : 1=3.9%, 2=9.5%, 4=23.0%, 8=54.9%, 16=8.7%, 32=0.0%, >=64=0.0% 00:35:43.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.375 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.375 issued rwts: total=5256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.375 filename2: (groupid=0, jobs=1): err= 0: pid=1790663: Thu Apr 25 20:30:39 2024 00:35:43.375 read: IOPS=524, BW=2098KiB/s (2148kB/s)(20.5MiB/10007msec) 00:35:43.375 slat (usec): min=6, max=120, avg=39.09, stdev=22.37 00:35:43.375 clat (usec): min=21381, max=70268, avg=30181.83, stdev=2180.49 00:35:43.375 lat (usec): min=21389, max=70303, avg=30220.92, stdev=2177.99 00:35:43.375 clat percentiles (usec): 00:35:43.375 | 1.00th=[28443], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:35:43.375 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:43.375 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:43.375 | 99.00th=[36439], 99.50th=[36963], 99.90th=[64226], 99.95th=[64226], 00:35:43.375 | 99.99th=[70779] 00:35:43.375 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2092.80, stdev=73.89, samples=20 00:35:43.375 iops : min= 480, max= 544, avg=523.20, stdev=18.47, samples=20 00:35:43.375 lat (msec) : 50=99.70%, 100=0.30% 00:35:43.375 cpu : usr=98.90%, sys=0.68%, ctx=13, majf=0, minf=1635 00:35:43.375 IO depths : 1=5.7%, 2=11.9%, 4=24.9%, 8=50.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:43.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.375 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.375 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.375 filename2: (groupid=0, jobs=1): err= 0: pid=1790664: Thu Apr 25 20:30:39 2024 00:35:43.375 read: IOPS=525, BW=2103KiB/s (2153kB/s)(20.6MiB/10014msec) 00:35:43.375 slat (usec): min=6, max=127, avg=43.49, stdev=22.07 00:35:43.375 clat (usec): min=21427, max=47957, 
avg=30084.60, stdev=1280.70 00:35:43.375 lat (usec): min=21446, max=48015, avg=30128.09, stdev=1278.70 00:35:43.375 clat percentiles (usec): 00:35:43.375 | 1.00th=[28705], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:35:43.375 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:35:43.375 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:35:43.375 | 99.00th=[31851], 99.50th=[33817], 99.90th=[47973], 99.95th=[47973], 00:35:43.375 | 99.99th=[47973] 00:35:43.375 bw ( KiB/s): min= 1923, max= 2176, per=4.17%, avg=2099.35, stdev=74.96, samples=20 00:35:43.375 iops : min= 480, max= 544, avg=524.80, stdev=18.83, samples=20 00:35:43.375 lat (msec) : 50=100.00% 00:35:43.375 cpu : usr=99.01%, sys=0.55%, ctx=15, majf=0, minf=1633 00:35:43.375 IO depths : 1=5.4%, 2=11.7%, 4=24.9%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:35:43.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.375 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.375 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.375 filename2: (groupid=0, jobs=1): err= 0: pid=1790665: Thu Apr 25 20:30:39 2024 00:35:43.375 read: IOPS=526, BW=2106KiB/s (2156kB/s)(20.6MiB/10004msec) 00:35:43.375 slat (usec): min=3, max=117, avg=27.02, stdev=23.71 00:35:43.375 clat (usec): min=11965, max=68065, avg=30178.01, stdev=3940.38 00:35:43.375 lat (usec): min=11974, max=68086, avg=30205.03, stdev=3940.12 00:35:43.375 clat percentiles (usec): 00:35:43.375 | 1.00th=[19792], 5.00th=[24249], 10.00th=[26870], 20.00th=[29492], 00:35:43.375 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:43.375 | 70.00th=[30540], 80.00th=[30540], 90.00th=[31327], 95.00th=[35914], 00:35:43.375 | 99.00th=[45876], 99.50th=[49021], 99.90th=[67634], 99.95th=[67634], 00:35:43.375 | 99.99th=[67634] 00:35:43.375 bw ( KiB/s): min= 1907, max= 2240, per=4.18%, avg=2102.89, stdev=80.66, samples=19 00:35:43.375 iops : min= 476, max= 560, avg=525.68, stdev=20.27, samples=19 00:35:43.375 lat (msec) : 20=1.08%, 50=98.61%, 100=0.30% 00:35:43.375 cpu : usr=99.06%, sys=0.53%, ctx=13, majf=0, minf=1636 00:35:43.375 IO depths : 1=2.5%, 2=5.6%, 4=13.8%, 8=66.1%, 16=12.0%, 32=0.0%, >=64=0.0% 00:35:43.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.375 complete : 0=0.0%, 4=91.6%, 8=4.7%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.375 issued rwts: total=5266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.375 filename2: (groupid=0, jobs=1): err= 0: pid=1790666: Thu Apr 25 20:30:39 2024 00:35:43.375 read: IOPS=529, BW=2119KiB/s (2170kB/s)(20.7MiB/10004msec) 00:35:43.375 slat (usec): min=6, max=135, avg=39.53, stdev=25.46 00:35:43.375 clat (usec): min=11831, max=67938, avg=29849.67, stdev=3585.42 00:35:43.375 lat (usec): min=11841, max=67966, avg=29889.20, stdev=3587.40 00:35:43.375 clat percentiles (usec): 00:35:43.375 | 1.00th=[17957], 5.00th=[24773], 10.00th=[29230], 20.00th=[29492], 00:35:43.375 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:35:43.375 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:43.375 | 99.00th=[43254], 99.50th=[47449], 99.90th=[67634], 99.95th=[67634], 00:35:43.375 | 99.99th=[67634] 00:35:43.375 bw ( KiB/s): min= 1923, max= 2272, per=4.21%, avg=2117.21, stdev=91.88, samples=19 00:35:43.375 iops 
: min= 480, max= 568, avg=529.26, stdev=23.06, samples=19 00:35:43.375 lat (msec) : 20=1.96%, 50=97.74%, 100=0.30% 00:35:43.375 cpu : usr=99.09%, sys=0.50%, ctx=13, majf=0, minf=1636 00:35:43.375 IO depths : 1=4.2%, 2=9.6%, 4=22.2%, 8=55.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:35:43.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.375 complete : 0=0.0%, 4=93.5%, 8=1.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.375 issued rwts: total=5300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.375 filename2: (groupid=0, jobs=1): err= 0: pid=1790667: Thu Apr 25 20:30:39 2024 00:35:43.375 read: IOPS=525, BW=2103KiB/s (2153kB/s)(20.5MiB/10007msec) 00:35:43.375 slat (usec): min=6, max=107, avg=31.17, stdev=21.31 00:35:43.375 clat (usec): min=17003, max=63693, avg=30172.69, stdev=2779.99 00:35:43.375 lat (usec): min=17010, max=63719, avg=30203.86, stdev=2780.50 00:35:43.375 clat percentiles (usec): 00:35:43.375 | 1.00th=[21365], 5.00th=[28967], 10.00th=[29492], 20.00th=[29754], 00:35:43.375 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:43.375 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:35:43.375 | 99.00th=[39584], 99.50th=[46400], 99.90th=[63701], 99.95th=[63701], 00:35:43.375 | 99.99th=[63701] 00:35:43.375 bw ( KiB/s): min= 1907, max= 2256, per=4.17%, avg=2099.35, stdev=79.29, samples=20 00:35:43.375 iops : min= 476, max= 564, avg=524.80, stdev=19.92, samples=20 00:35:43.375 lat (msec) : 20=0.53%, 50=99.16%, 100=0.30% 00:35:43.375 cpu : usr=99.05%, sys=0.53%, ctx=13, majf=0, minf=1636 00:35:43.375 IO depths : 1=4.6%, 2=10.1%, 4=22.1%, 8=54.8%, 16=8.4%, 32=0.0%, >=64=0.0% 00:35:43.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.375 complete : 0=0.0%, 4=93.5%, 8=1.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.375 issued rwts: total=5260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:43.375 filename2: (groupid=0, jobs=1): err= 0: pid=1790668: Thu Apr 25 20:30:39 2024 00:35:43.375 read: IOPS=519, BW=2077KiB/s (2127kB/s)(20.3MiB/10004msec) 00:35:43.375 slat (usec): min=5, max=121, avg=26.39, stdev=23.21 00:35:43.375 clat (usec): min=7182, max=67503, avg=30616.95, stdev=3762.74 00:35:43.375 lat (usec): min=7193, max=67529, avg=30643.34, stdev=3762.29 00:35:43.375 clat percentiles (usec): 00:35:43.375 | 1.00th=[21103], 5.00th=[28967], 10.00th=[29492], 20.00th=[29754], 00:35:43.375 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:43.375 | 70.00th=[30540], 80.00th=[30540], 90.00th=[31065], 95.00th=[34341], 00:35:43.375 | 99.00th=[46924], 99.50th=[50594], 99.90th=[67634], 99.95th=[67634], 00:35:43.375 | 99.99th=[67634] 00:35:43.376 bw ( KiB/s): min= 1920, max= 2240, per=4.12%, avg=2072.42, stdev=77.58, samples=19 00:35:43.376 iops : min= 480, max= 560, avg=518.11, stdev=19.40, samples=19 00:35:43.376 lat (msec) : 10=0.02%, 20=0.69%, 50=98.71%, 100=0.58% 00:35:43.376 cpu : usr=99.00%, sys=0.58%, ctx=14, majf=0, minf=1633 00:35:43.376 IO depths : 1=1.9%, 2=5.0%, 4=13.3%, 8=66.7%, 16=13.1%, 32=0.0%, >=64=0.0% 00:35:43.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.376 complete : 0=0.0%, 4=91.8%, 8=4.9%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.376 issued rwts: total=5194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.376 latency : target=0, window=0, percentile=100.00%, depth=16 
00:35:43.376 00:35:43.376 Run status group 0 (all jobs): 00:35:43.376 READ: bw=49.2MiB/s (51.6MB/s), 2077KiB/s-2133KiB/s (2127kB/s-2184kB/s), io=494MiB (518MB), run=10004-10047msec 00:35:43.376 ----------------------------------------------------- 00:35:43.376 Suppressions used: 00:35:43.376 count bytes template 00:35:43.376 45 402 /usr/src/fio/parse.c 00:35:43.376 1 8 libtcmalloc_minimal.so 00:35:43.376 1 904 libcrypto.so 00:35:43.376 ----------------------------------------------------- 00:35:43.376 00:35:43.376 20:30:39 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:43.376 20:30:39 -- target/dif.sh@43 -- # local sub 00:35:43.376 20:30:39 -- target/dif.sh@45 -- # for sub in "$@" 00:35:43.376 20:30:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:43.376 20:30:39 -- target/dif.sh@36 -- # local sub_id=0 00:35:43.376 20:30:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:43.376 20:30:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.376 20:30:39 -- common/autotest_common.sh@10 -- # set +x 00:35:43.376 20:30:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.376 20:30:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:43.376 20:30:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.376 20:30:39 -- common/autotest_common.sh@10 -- # set +x 00:35:43.376 20:30:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.376 20:30:39 -- target/dif.sh@45 -- # for sub in "$@" 00:35:43.376 20:30:39 -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:43.376 20:30:39 -- target/dif.sh@36 -- # local sub_id=1 00:35:43.376 20:30:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:43.376 20:30:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.376 20:30:39 -- common/autotest_common.sh@10 -- # set +x 00:35:43.376 20:30:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.376 20:30:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:43.376 20:30:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.376 20:30:39 -- common/autotest_common.sh@10 -- # set +x 00:35:43.376 20:30:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.376 20:30:39 -- target/dif.sh@45 -- # for sub in "$@" 00:35:43.376 20:30:39 -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:43.376 20:30:39 -- target/dif.sh@36 -- # local sub_id=2 00:35:43.376 20:30:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:43.376 20:30:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.376 20:30:39 -- common/autotest_common.sh@10 -- # set +x 00:35:43.376 20:30:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.376 20:30:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:43.376 20:30:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.376 20:30:39 -- common/autotest_common.sh@10 -- # set +x 00:35:43.376 20:30:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.376 20:30:39 -- target/dif.sh@115 -- # NULL_DIF=1 00:35:43.376 20:30:39 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:43.376 20:30:39 -- target/dif.sh@115 -- # numjobs=2 00:35:43.376 20:30:39 -- target/dif.sh@115 -- # iodepth=8 00:35:43.376 20:30:39 -- target/dif.sh@115 -- # runtime=5 00:35:43.376 20:30:39 -- target/dif.sh@115 -- # files=1 00:35:43.376 20:30:39 -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:43.376 20:30:39 -- target/dif.sh@28 -- # local sub 00:35:43.376 
20:30:39 -- target/dif.sh@30 -- # for sub in "$@" 00:35:43.376 20:30:39 -- target/dif.sh@31 -- # create_subsystem 0 00:35:43.376 20:30:39 -- target/dif.sh@18 -- # local sub_id=0 00:35:43.376 20:30:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:43.376 20:30:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.376 20:30:39 -- common/autotest_common.sh@10 -- # set +x 00:35:43.376 bdev_null0 00:35:43.376 20:30:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.376 20:30:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:43.376 20:30:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.376 20:30:39 -- common/autotest_common.sh@10 -- # set +x 00:35:43.376 20:30:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.376 20:30:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:43.376 20:30:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.376 20:30:39 -- common/autotest_common.sh@10 -- # set +x 00:35:43.376 20:30:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.376 20:30:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:43.376 20:30:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.376 20:30:39 -- common/autotest_common.sh@10 -- # set +x 00:35:43.376 [2024-04-25 20:30:39.901796] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:43.376 20:30:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.376 20:30:39 -- target/dif.sh@30 -- # for sub in "$@" 00:35:43.376 20:30:39 -- target/dif.sh@31 -- # create_subsystem 1 00:35:43.376 20:30:39 -- target/dif.sh@18 -- # local sub_id=1 00:35:43.376 20:30:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:43.376 20:30:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.376 20:30:39 -- common/autotest_common.sh@10 -- # set +x 00:35:43.376 bdev_null1 00:35:43.376 20:30:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.376 20:30:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:43.376 20:30:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.376 20:30:39 -- common/autotest_common.sh@10 -- # set +x 00:35:43.376 20:30:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.376 20:30:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:43.376 20:30:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.376 20:30:39 -- common/autotest_common.sh@10 -- # set +x 00:35:43.376 20:30:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.376 20:30:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:43.376 20:30:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.376 20:30:39 -- common/autotest_common.sh@10 -- # set +x 00:35:43.376 20:30:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.376 20:30:39 -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:43.376 20:30:39 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:43.376 20:30:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
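The destroy_subsystems/create_subsystems pair traced above tears down the three DIF-type-2 targets and rebuilds two of them with DIF type 1 for the bs=8k,16k,128k / numjobs=2 / iodepth=8 / runtime=5 pass (target/dif.sh@115). Standalone, the same steps would look roughly like this; the rpc.py path is illustrative, and the namespace/listener calls are identical to the earlier sketch:

# Tear down the DIF-type-2 targets from the previous pass...
for i in 0 1 2; do
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    scripts/rpc.py bdev_null_delete bdev_null$i
done

# ...then re-create two of them, the only change being the DIF type.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
# nvmf_create_subsystem / nvmf_subsystem_add_ns / nvmf_subsystem_add_listener
# for cnode0 and cnode1 follow exactly as in the first sketch.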
00:35:43.376 20:30:39 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:43.376 20:30:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:43.376 20:30:39 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:43.376 20:30:39 -- nvmf/common.sh@520 -- # config=() 00:35:43.376 20:30:39 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:43.376 20:30:39 -- nvmf/common.sh@520 -- # local subsystem config 00:35:43.376 20:30:39 -- common/autotest_common.sh@1318 -- # local sanitizers 00:35:43.376 20:30:39 -- target/dif.sh@82 -- # gen_fio_conf 00:35:43.376 20:30:39 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:35:43.376 20:30:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:43.376 20:30:39 -- common/autotest_common.sh@1320 -- # shift 00:35:43.376 20:30:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:43.376 { 00:35:43.376 "params": { 00:35:43.376 "name": "Nvme$subsystem", 00:35:43.376 "trtype": "$TEST_TRANSPORT", 00:35:43.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:43.376 "adrfam": "ipv4", 00:35:43.376 "trsvcid": "$NVMF_PORT", 00:35:43.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:43.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:43.376 "hdgst": ${hdgst:-false}, 00:35:43.376 "ddgst": ${ddgst:-false} 00:35:43.376 }, 00:35:43.376 "method": "bdev_nvme_attach_controller" 00:35:43.376 } 00:35:43.376 EOF 00:35:43.376 )") 00:35:43.376 20:30:39 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:35:43.376 20:30:39 -- target/dif.sh@54 -- # local file 00:35:43.376 20:30:39 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:43.376 20:30:39 -- target/dif.sh@56 -- # cat 00:35:43.376 20:30:39 -- nvmf/common.sh@542 -- # cat 00:35:43.377 20:30:39 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:35:43.377 20:30:39 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:43.377 20:30:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:43.377 20:30:39 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:43.377 20:30:39 -- target/dif.sh@72 -- # (( file <= files )) 00:35:43.377 20:30:39 -- target/dif.sh@73 -- # cat 00:35:43.377 20:30:39 -- target/dif.sh@72 -- # (( file++ )) 00:35:43.377 20:30:39 -- target/dif.sh@72 -- # (( file <= files )) 00:35:43.377 20:30:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:43.377 20:30:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:43.377 { 00:35:43.377 "params": { 00:35:43.377 "name": "Nvme$subsystem", 00:35:43.377 "trtype": "$TEST_TRANSPORT", 00:35:43.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:43.377 "adrfam": "ipv4", 00:35:43.377 "trsvcid": "$NVMF_PORT", 00:35:43.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:43.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:43.377 "hdgst": ${hdgst:-false}, 00:35:43.377 "ddgst": ${ddgst:-false} 00:35:43.377 }, 00:35:43.377 "method": "bdev_nvme_attach_controller" 00:35:43.377 } 00:35:43.377 EOF 00:35:43.377 )") 00:35:43.377 20:30:39 -- nvmf/common.sh@542 -- # cat 00:35:43.377 20:30:39 -- nvmf/common.sh@544 -- # jq . 
00:35:43.377 20:30:39 -- nvmf/common.sh@545 -- # IFS=, 00:35:43.377 20:30:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:43.377 "params": { 00:35:43.377 "name": "Nvme0", 00:35:43.377 "trtype": "tcp", 00:35:43.377 "traddr": "10.0.0.2", 00:35:43.377 "adrfam": "ipv4", 00:35:43.377 "trsvcid": "4420", 00:35:43.377 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:43.377 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:43.377 "hdgst": false, 00:35:43.377 "ddgst": false 00:35:43.377 }, 00:35:43.377 "method": "bdev_nvme_attach_controller" 00:35:43.377 },{ 00:35:43.377 "params": { 00:35:43.377 "name": "Nvme1", 00:35:43.377 "trtype": "tcp", 00:35:43.377 "traddr": "10.0.0.2", 00:35:43.377 "adrfam": "ipv4", 00:35:43.377 "trsvcid": "4420", 00:35:43.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:43.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:43.377 "hdgst": false, 00:35:43.377 "ddgst": false 00:35:43.377 }, 00:35:43.377 "method": "bdev_nvme_attach_controller" 00:35:43.377 }' 00:35:43.377 20:30:39 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:43.377 20:30:39 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:43.377 20:30:39 -- common/autotest_common.sh@1326 -- # break 00:35:43.377 20:30:39 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:43.377 20:30:39 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:43.377 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:43.377 ... 00:35:43.377 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:43.377 ... 00:35:43.377 fio-3.35 00:35:43.377 Starting 4 threads 00:35:43.377 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.377 [2024-04-25 20:30:41.228339] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
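The lines above also show why the wrapper resolves libasan before launching fio: fio itself is not ASAN-instrumented, so the sanitizer runtime must be preloaded ahead of the ASAN-linked spdk_bdev ioengine. A minimal sketch of the same detection, with the workspace paths from this run (the config and job files are illustrative stand-ins for the /dev/fd/6x pipes used here):

plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev
# Pick up the ASAN runtime the plugin was linked against, if any.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# Sanitizer runtime first, then the SPDK bdev ioengine.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme_attach.json /tmp/dif-job.fio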
00:35:43.377 [2024-04-25 20:30:41.228444] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:48.648 00:35:48.648 filename0: (groupid=0, jobs=1): err= 0: pid=1793193: Thu Apr 25 20:30:46 2024 00:35:48.648 read: IOPS=2547, BW=19.9MiB/s (20.9MB/s)(99.6MiB/5002msec) 00:35:48.649 slat (nsec): min=3708, max=36446, avg=7619.75, stdev=1997.97 00:35:48.649 clat (usec): min=1105, max=6669, avg=3119.75, stdev=526.94 00:35:48.649 lat (usec): min=1112, max=6686, avg=3127.37, stdev=526.97 00:35:48.649 clat percentiles (usec): 00:35:48.649 | 1.00th=[ 2073], 5.00th=[ 2442], 10.00th=[ 2606], 20.00th=[ 2769], 00:35:48.649 | 30.00th=[ 2835], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3097], 00:35:48.649 | 70.00th=[ 3261], 80.00th=[ 3392], 90.00th=[ 3884], 95.00th=[ 4293], 00:35:48.649 | 99.00th=[ 4752], 99.50th=[ 4883], 99.90th=[ 5145], 99.95th=[ 5473], 00:35:48.649 | 99.99th=[ 5866] 00:35:48.649 bw ( KiB/s): min=19232, max=21632, per=24.26%, avg=20368.00, stdev=706.68, samples=9 00:35:48.649 iops : min= 2404, max= 2704, avg=2546.00, stdev=88.33, samples=9 00:35:48.649 lat (msec) : 2=0.52%, 4=90.87%, 10=8.61% 00:35:48.649 cpu : usr=96.46%, sys=2.52%, ctx=262, majf=0, minf=1636 00:35:48.649 IO depths : 1=0.1%, 2=0.7%, 4=70.7%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.649 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.649 issued rwts: total=12744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.649 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:48.649 filename0: (groupid=0, jobs=1): err= 0: pid=1793194: Thu Apr 25 20:30:46 2024 00:35:48.649 read: IOPS=2761, BW=21.6MiB/s (22.6MB/s)(108MiB/5002msec) 00:35:48.649 slat (nsec): min=4048, max=38243, avg=7530.01, stdev=1892.49 00:35:48.649 clat (usec): min=1274, max=6592, avg=2877.54, stdev=494.91 00:35:48.649 lat (usec): min=1280, max=6617, avg=2885.07, stdev=494.98 00:35:48.649 clat percentiles (usec): 00:35:48.649 | 1.00th=[ 1975], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2507], 00:35:48.649 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2933], 00:35:48.649 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3425], 95.00th=[ 3949], 00:35:48.649 | 99.00th=[ 4490], 99.50th=[ 4555], 99.90th=[ 5276], 99.95th=[ 6456], 00:35:48.649 | 99.99th=[ 6456] 00:35:48.649 bw ( KiB/s): min=21248, max=23568, per=26.32%, avg=22091.20, stdev=751.79, samples=10 00:35:48.649 iops : min= 2656, max= 2946, avg=2761.40, stdev=93.97, samples=10 00:35:48.649 lat (msec) : 2=1.30%, 4=93.81%, 10=4.89% 00:35:48.649 cpu : usr=97.00%, sys=2.50%, ctx=122, majf=0, minf=1637 00:35:48.649 IO depths : 1=0.1%, 2=1.6%, 4=67.4%, 8=31.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.649 complete : 0=0.0%, 4=95.3%, 8=4.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.649 issued rwts: total=13815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.649 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:48.649 filename1: (groupid=0, jobs=1): err= 0: pid=1793195: Thu Apr 25 20:30:46 2024 00:35:48.649 read: IOPS=2513, BW=19.6MiB/s (20.6MB/s)(98.2MiB/5001msec) 00:35:48.649 slat (usec): min=3, max=124, avg= 7.41, stdev= 2.00 00:35:48.649 clat (usec): min=966, max=5563, avg=3163.19, stdev=510.38 00:35:48.649 lat (usec): min=973, max=5571, avg=3170.59, stdev=510.41 00:35:48.649 clat percentiles (usec): 00:35:48.649 | 1.00th=[ 2278], 5.00th=[ 
2606], 10.00th=[ 2704], 20.00th=[ 2802], 00:35:48.649 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3097], 00:35:48.649 | 70.00th=[ 3261], 80.00th=[ 3458], 90.00th=[ 3884], 95.00th=[ 4293], 00:35:48.649 | 99.00th=[ 4752], 99.50th=[ 4883], 99.90th=[ 5276], 99.95th=[ 5407], 00:35:48.649 | 99.99th=[ 5538] 00:35:48.649 bw ( KiB/s): min=18800, max=20576, per=23.94%, avg=20097.78, stdev=520.46, samples=9 00:35:48.649 iops : min= 2350, max= 2572, avg=2512.22, stdev=65.06, samples=9 00:35:48.649 lat (usec) : 1000=0.02% 00:35:48.649 lat (msec) : 2=0.07%, 4=90.72%, 10=9.19% 00:35:48.649 cpu : usr=98.08%, sys=1.64%, ctx=6, majf=0, minf=1637 00:35:48.649 IO depths : 1=0.1%, 2=0.2%, 4=71.9%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.649 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.649 issued rwts: total=12571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.649 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:48.649 filename1: (groupid=0, jobs=1): err= 0: pid=1793196: Thu Apr 25 20:30:46 2024 00:35:48.649 read: IOPS=2670, BW=20.9MiB/s (21.9MB/s)(104MiB/5002msec) 00:35:48.649 slat (nsec): min=4031, max=35825, avg=7483.71, stdev=1779.21 00:35:48.649 clat (usec): min=1575, max=8050, avg=2976.50, stdev=527.91 00:35:48.649 lat (usec): min=1582, max=8070, avg=2983.98, stdev=527.92 00:35:48.649 clat percentiles (usec): 00:35:48.649 | 1.00th=[ 2040], 5.00th=[ 2278], 10.00th=[ 2442], 20.00th=[ 2606], 00:35:48.649 | 30.00th=[ 2737], 40.00th=[ 2802], 50.00th=[ 2900], 60.00th=[ 2999], 00:35:48.649 | 70.00th=[ 3064], 80.00th=[ 3261], 90.00th=[ 3654], 95.00th=[ 4113], 00:35:48.649 | 99.00th=[ 4621], 99.50th=[ 4752], 99.90th=[ 5473], 99.95th=[ 8029], 00:35:48.649 | 99.99th=[ 8029] 00:35:48.649 bw ( KiB/s): min=20745, max=22496, per=25.45%, avg=21360.90, stdev=498.56, samples=10 00:35:48.649 iops : min= 2593, max= 2812, avg=2670.10, stdev=62.34, samples=10 00:35:48.649 lat (msec) : 2=0.67%, 4=92.50%, 10=6.83% 00:35:48.649 cpu : usr=97.54%, sys=2.10%, ctx=76, majf=0, minf=1635 00:35:48.649 IO depths : 1=0.1%, 2=0.8%, 4=69.9%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.649 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.649 issued rwts: total=13356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.649 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:48.649 00:35:48.649 Run status group 0 (all jobs): 00:35:48.649 READ: bw=82.0MiB/s (86.0MB/s), 19.6MiB/s-21.6MiB/s (20.6MB/s-22.6MB/s), io=410MiB (430MB), run=5001-5002msec 00:35:49.217 ----------------------------------------------------- 00:35:49.217 Suppressions used: 00:35:49.217 count bytes template 00:35:49.217 6 52 /usr/src/fio/parse.c 00:35:49.217 1 8 libtcmalloc_minimal.so 00:35:49.217 1 904 libcrypto.so 00:35:49.217 ----------------------------------------------------- 00:35:49.217 00:35:49.217 20:30:46 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:49.217 20:30:46 -- target/dif.sh@43 -- # local sub 00:35:49.217 20:30:46 -- target/dif.sh@45 -- # for sub in "$@" 00:35:49.217 20:30:46 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:49.217 20:30:46 -- target/dif.sh@36 -- # local sub_id=0 00:35:49.217 20:30:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:49.217 20:30:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:49.217 
20:30:46 -- common/autotest_common.sh@10 -- # set +x 00:35:49.217 20:30:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:49.217 20:30:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:49.217 20:30:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:49.217 20:30:46 -- common/autotest_common.sh@10 -- # set +x 00:35:49.217 20:30:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:49.217 20:30:46 -- target/dif.sh@45 -- # for sub in "$@" 00:35:49.217 20:30:46 -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:49.217 20:30:46 -- target/dif.sh@36 -- # local sub_id=1 00:35:49.217 20:30:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:49.217 20:30:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:49.217 20:30:46 -- common/autotest_common.sh@10 -- # set +x 00:35:49.217 20:30:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:49.217 20:30:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:49.217 20:30:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:49.217 20:30:46 -- common/autotest_common.sh@10 -- # set +x 00:35:49.217 20:30:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:49.217 00:35:49.217 real 0m26.403s 00:35:49.217 user 5m22.892s 00:35:49.217 sys 0m4.048s 00:35:49.217 20:30:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:49.217 20:30:46 -- common/autotest_common.sh@10 -- # set +x 00:35:49.217 ************************************ 00:35:49.217 END TEST fio_dif_rand_params 00:35:49.217 ************************************ 00:35:49.217 20:30:46 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:49.217 20:30:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:49.217 20:30:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:49.217 20:30:46 -- common/autotest_common.sh@10 -- # set +x 00:35:49.217 ************************************ 00:35:49.217 START TEST fio_dif_digest 00:35:49.217 ************************************ 00:35:49.217 20:30:47 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:35:49.217 20:30:47 -- target/dif.sh@123 -- # local NULL_DIF 00:35:49.217 20:30:47 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:49.217 20:30:47 -- target/dif.sh@125 -- # local hdgst ddgst 00:35:49.217 20:30:47 -- target/dif.sh@127 -- # NULL_DIF=3 00:35:49.218 20:30:47 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:49.218 20:30:47 -- target/dif.sh@127 -- # numjobs=3 00:35:49.218 20:30:47 -- target/dif.sh@127 -- # iodepth=3 00:35:49.218 20:30:47 -- target/dif.sh@127 -- # runtime=10 00:35:49.218 20:30:47 -- target/dif.sh@128 -- # hdgst=true 00:35:49.218 20:30:47 -- target/dif.sh@128 -- # ddgst=true 00:35:49.218 20:30:47 -- target/dif.sh@130 -- # create_subsystems 0 00:35:49.218 20:30:47 -- target/dif.sh@28 -- # local sub 00:35:49.218 20:30:47 -- target/dif.sh@30 -- # for sub in "$@" 00:35:49.218 20:30:47 -- target/dif.sh@31 -- # create_subsystem 0 00:35:49.218 20:30:47 -- target/dif.sh@18 -- # local sub_id=0 00:35:49.218 20:30:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:49.218 20:30:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:49.218 20:30:47 -- common/autotest_common.sh@10 -- # set +x 00:35:49.218 bdev_null0 00:35:49.218 20:30:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:49.218 20:30:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:35:49.218 20:30:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:49.218 20:30:47 -- common/autotest_common.sh@10 -- # set +x 00:35:49.218 20:30:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:49.218 20:30:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:49.218 20:30:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:49.218 20:30:47 -- common/autotest_common.sh@10 -- # set +x 00:35:49.218 20:30:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:49.218 20:30:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:49.218 20:30:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:49.218 20:30:47 -- common/autotest_common.sh@10 -- # set +x 00:35:49.218 [2024-04-25 20:30:47.035118] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:49.218 20:30:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:49.218 20:30:47 -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:49.218 20:30:47 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.218 20:30:47 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.218 20:30:47 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:49.218 20:30:47 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:49.218 20:30:47 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:49.218 20:30:47 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:49.218 20:30:47 -- common/autotest_common.sh@1318 -- # local sanitizers 00:35:49.218 20:30:47 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.218 20:30:47 -- nvmf/common.sh@520 -- # config=() 00:35:49.218 20:30:47 -- common/autotest_common.sh@1320 -- # shift 00:35:49.218 20:30:47 -- nvmf/common.sh@520 -- # local subsystem config 00:35:49.218 20:30:47 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:35:49.218 20:30:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:49.218 20:30:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:49.218 20:30:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:49.218 { 00:35:49.218 "params": { 00:35:49.218 "name": "Nvme$subsystem", 00:35:49.218 "trtype": "$TEST_TRANSPORT", 00:35:49.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:49.218 "adrfam": "ipv4", 00:35:49.218 "trsvcid": "$NVMF_PORT", 00:35:49.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:49.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:49.218 "hdgst": ${hdgst:-false}, 00:35:49.218 "ddgst": ${ddgst:-false} 00:35:49.218 }, 00:35:49.218 "method": "bdev_nvme_attach_controller" 00:35:49.218 } 00:35:49.218 EOF 00:35:49.218 )") 00:35:49.218 20:30:47 -- target/dif.sh@82 -- # gen_fio_conf 00:35:49.218 20:30:47 -- target/dif.sh@54 -- # local file 00:35:49.218 20:30:47 -- target/dif.sh@56 -- # cat 00:35:49.218 20:30:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.218 20:30:47 -- nvmf/common.sh@542 -- # cat 00:35:49.218 20:30:47 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:49.218 20:30:47 -- common/autotest_common.sh@1324 -- # awk 
'{print $3}' 00:35:49.218 20:30:47 -- nvmf/common.sh@544 -- # jq . 00:35:49.218 20:30:47 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:49.218 20:30:47 -- target/dif.sh@72 -- # (( file <= files )) 00:35:49.218 20:30:47 -- nvmf/common.sh@545 -- # IFS=, 00:35:49.218 20:30:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:49.218 "params": { 00:35:49.218 "name": "Nvme0", 00:35:49.218 "trtype": "tcp", 00:35:49.218 "traddr": "10.0.0.2", 00:35:49.218 "adrfam": "ipv4", 00:35:49.218 "trsvcid": "4420", 00:35:49.218 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:49.218 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:49.218 "hdgst": true, 00:35:49.218 "ddgst": true 00:35:49.218 }, 00:35:49.218 "method": "bdev_nvme_attach_controller" 00:35:49.218 }' 00:35:49.218 20:30:47 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:49.218 20:30:47 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:49.218 20:30:47 -- common/autotest_common.sh@1326 -- # break 00:35:49.218 20:30:47 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:49.218 20:30:47 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.787 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:49.787 ... 00:35:49.787 fio-3.35 00:35:49.787 Starting 3 threads 00:35:49.787 EAL: No free 2048 kB hugepages reported on node 1 00:35:50.354 [2024-04-25 20:30:48.058829] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:35:50.354 [2024-04-25 20:30:48.058890] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:36:00.387 00:36:00.387 filename0: (groupid=0, jobs=1): err= 0: pid=1794827: Thu Apr 25 20:30:58 2024 00:36:00.387 read: IOPS=284, BW=35.6MiB/s (37.3MB/s)(358MiB/10043msec) 00:36:00.387 slat (nsec): min=4517, max=26373, avg=8118.04, stdev=1137.80 00:36:00.387 clat (usec): min=8367, max=51587, avg=10510.73, stdev=1240.25 00:36:00.387 lat (usec): min=8375, max=51594, avg=10518.85, stdev=1240.27 00:36:00.387 clat percentiles (usec): 00:36:00.387 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:36:00.387 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:36:00.387 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11207], 95.00th=[11469], 00:36:00.387 | 99.00th=[12125], 99.50th=[12387], 99.90th=[13173], 99.95th=[49546], 00:36:00.387 | 99.99th=[51643] 00:36:00.387 bw ( KiB/s): min=35072, max=37120, per=34.86%, avg=36578.70, stdev=535.84, samples=20 00:36:00.387 iops : min= 274, max= 290, avg=285.75, stdev= 4.18, samples=20 00:36:00.387 lat (msec) : 10=20.94%, 20=78.99%, 50=0.03%, 100=0.03% 00:36:00.387 cpu : usr=97.51%, sys=2.23%, ctx=14, majf=0, minf=1638 00:36:00.387 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:00.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.387 issued rwts: total=2860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:00.387 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:00.387 filename0: (groupid=0, jobs=1): err= 0: pid=1794829: Thu Apr 25 20:30:58 2024 00:36:00.387 read: IOPS=269, BW=33.7MiB/s (35.4MB/s)(338MiB/10004msec) 00:36:00.387 slat (nsec): 
min=4487, max=32792, avg=8296.06, stdev=1294.38 00:36:00.387 clat (usec): min=5151, max=22656, avg=11105.85, stdev=800.41 00:36:00.387 lat (usec): min=5158, max=22673, avg=11114.15, stdev=800.44 00:36:00.387 clat percentiles (usec): 00:36:00.387 | 1.00th=[ 9503], 5.00th=[10028], 10.00th=[10290], 20.00th=[10552], 00:36:00.387 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:36:00.387 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12256], 00:36:00.387 | 99.00th=[12911], 99.50th=[13173], 99.90th=[21365], 99.95th=[21365], 00:36:00.387 | 99.99th=[22676] 00:36:00.387 bw ( KiB/s): min=33792, max=35840, per=32.90%, avg=34525.00, stdev=484.89, samples=20 00:36:00.387 iops : min= 264, max= 280, avg=269.70, stdev= 3.80, samples=20 00:36:00.387 lat (msec) : 10=5.33%, 20=94.56%, 50=0.11% 00:36:00.387 cpu : usr=97.43%, sys=2.31%, ctx=14, majf=0, minf=1634 00:36:00.387 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:00.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.387 issued rwts: total=2700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:00.387 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:00.387 filename0: (groupid=0, jobs=1): err= 0: pid=1794830: Thu Apr 25 20:30:58 2024 00:36:00.387 read: IOPS=266, BW=33.3MiB/s (34.9MB/s)(334MiB/10045msec) 00:36:00.387 slat (nsec): min=4628, max=32343, avg=8256.89, stdev=1293.73 00:36:00.387 clat (usec): min=8838, max=52625, avg=11239.64, stdev=1265.28 00:36:00.387 lat (usec): min=8845, max=52632, avg=11247.90, stdev=1265.30 00:36:00.387 clat percentiles (usec): 00:36:00.387 | 1.00th=[ 9634], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:36:00.387 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:36:00.387 | 70.00th=[11600], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:36:00.387 | 99.00th=[13042], 99.50th=[13435], 99.90th=[15008], 99.95th=[45876], 00:36:00.387 | 99.99th=[52691] 00:36:00.387 bw ( KiB/s): min=32768, max=35328, per=32.60%, avg=34214.40, stdev=696.27, samples=20 00:36:00.387 iops : min= 256, max= 276, avg=267.30, stdev= 5.44, samples=20 00:36:00.387 lat (msec) : 10=4.15%, 20=95.78%, 50=0.04%, 100=0.04% 00:36:00.387 cpu : usr=97.47%, sys=2.27%, ctx=14, majf=0, minf=1634 00:36:00.387 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:00.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:00.387 issued rwts: total=2675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:00.387 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:00.387 00:36:00.387 Run status group 0 (all jobs): 00:36:00.387 READ: bw=102MiB/s (107MB/s), 33.3MiB/s-35.6MiB/s (34.9MB/s-37.3MB/s), io=1029MiB (1079MB), run=10004-10045msec 00:36:01.375 ----------------------------------------------------- 00:36:01.375 Suppressions used: 00:36:01.375 count bytes template 00:36:01.375 5 44 /usr/src/fio/parse.c 00:36:01.375 1 8 libtcmalloc_minimal.so 00:36:01.375 1 904 libcrypto.so 00:36:01.375 ----------------------------------------------------- 00:36:01.375 00:36:01.375 20:30:58 -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:01.375 20:30:58 -- target/dif.sh@43 -- # local sub 00:36:01.375 20:30:58 -- target/dif.sh@45 -- # for sub in "$@" 00:36:01.375 20:30:58 -- target/dif.sh@46 -- # destroy_subsystem 0 
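For reference, the digest pass reported above drives a single DIF-type-3 null bdev with three 128 KiB random readers at queue depth 3 for 10 seconds, with header and data digest enabled on the TCP connection ("hdgst": true / "ddgst": true in the attach JSON printed earlier). The actual job file is generated on the fly by gen_fio_conf; a hand-written equivalent, assuming the attached controller's namespace appears as Nvme0n1, would look roughly like:

cat > /tmp/digest-job.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=10
time_based

[filename0]
filename=Nvme0n1
EOF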
00:36:01.375 20:30:58 -- target/dif.sh@36 -- # local sub_id=0 00:36:01.375 20:30:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:01.375 20:30:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:01.375 20:30:58 -- common/autotest_common.sh@10 -- # set +x 00:36:01.375 20:30:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:01.375 20:30:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:01.376 20:30:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:01.376 20:30:58 -- common/autotest_common.sh@10 -- # set +x 00:36:01.376 20:30:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:01.376 00:36:01.376 real 0m11.973s 00:36:01.376 user 0m44.970s 00:36:01.376 sys 0m1.084s 00:36:01.376 20:30:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:01.376 20:30:58 -- common/autotest_common.sh@10 -- # set +x 00:36:01.376 ************************************ 00:36:01.376 END TEST fio_dif_digest 00:36:01.376 ************************************ 00:36:01.376 20:30:59 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:01.376 20:30:59 -- target/dif.sh@147 -- # nvmftestfini 00:36:01.376 20:30:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:36:01.376 20:30:59 -- nvmf/common.sh@116 -- # sync 00:36:01.376 20:30:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:36:01.376 20:30:59 -- nvmf/common.sh@119 -- # set +e 00:36:01.376 20:30:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:36:01.376 20:30:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:36:01.376 rmmod nvme_tcp 00:36:01.376 rmmod nvme_fabrics 00:36:01.376 rmmod nvme_keyring 00:36:01.376 20:30:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:36:01.376 20:30:59 -- nvmf/common.sh@123 -- # set -e 00:36:01.376 20:30:59 -- nvmf/common.sh@124 -- # return 0 00:36:01.376 20:30:59 -- nvmf/common.sh@477 -- # '[' -n 1782893 ']' 00:36:01.376 20:30:59 -- nvmf/common.sh@478 -- # killprocess 1782893 00:36:01.376 20:30:59 -- common/autotest_common.sh@926 -- # '[' -z 1782893 ']' 00:36:01.376 20:30:59 -- common/autotest_common.sh@930 -- # kill -0 1782893 00:36:01.376 20:30:59 -- common/autotest_common.sh@931 -- # uname 00:36:01.376 20:30:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:36:01.376 20:30:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1782893 00:36:01.376 20:30:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:36:01.376 20:30:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:36:01.376 20:30:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1782893' 00:36:01.376 killing process with pid 1782893 00:36:01.376 20:30:59 -- common/autotest_common.sh@945 -- # kill 1782893 00:36:01.376 20:30:59 -- common/autotest_common.sh@950 -- # wait 1782893 00:36:01.940 20:30:59 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:36:01.940 20:30:59 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:36:04.475 Waiting for block devices as requested 00:36:04.475 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:36:04.475 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:04.475 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:04.475 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:04.475 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:36:04.475 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:04.475 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:36:04.732 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:04.732 0000:74:01.0 
(8086 0b25): vfio-pci -> idxd 00:36:04.732 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:04.732 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:36:04.991 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:36:04.991 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:36:04.991 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:04.991 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:36:05.250 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:05.250 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:36:05.250 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:36:05.510 20:31:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:36:05.510 20:31:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:36:05.510 20:31:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:05.510 20:31:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:36:05.510 20:31:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:05.510 20:31:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:05.510 20:31:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:08.043 20:31:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:36:08.043 00:36:08.043 real 1m17.783s 00:36:08.043 user 8m9.904s 00:36:08.043 sys 0m16.019s 00:36:08.043 20:31:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:08.043 20:31:05 -- common/autotest_common.sh@10 -- # set +x 00:36:08.043 ************************************ 00:36:08.043 END TEST nvmf_dif 00:36:08.043 ************************************ 00:36:08.043 20:31:05 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:08.043 20:31:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:36:08.043 20:31:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:08.043 20:31:05 -- common/autotest_common.sh@10 -- # set +x 00:36:08.043 ************************************ 00:36:08.043 START TEST nvmf_abort_qd_sizes 00:36:08.043 ************************************ 00:36:08.043 20:31:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:08.043 * Looking for test storage... 
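The setup.sh reset pass above hands the DSA/IAA engines and the NVMe controllers back to their native kernel drivers before the next test stage rebinds them. Outside the helper script, the same rebind can be done per device through sysfs; a minimal sketch, with the BDF and target driver taken from one of the lines above purely as an example:

bdf=0000:c9:00.0
# Detach from the current driver (vfio-pci in the lines above), if one is bound...
if [ -e "/sys/bus/pci/devices/$bdf/driver" ]; then
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
fi
# ...then steer the next probe to the native driver and trigger it.
echo nvme > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe
# Clear the override so later hot-plug events are not pinned to this driver.
echo > "/sys/bus/pci/devices/$bdf/driver_override"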
00:36:08.043 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:36:08.043 20:31:05 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:36:08.043 20:31:05 -- nvmf/common.sh@7 -- # uname -s 00:36:08.043 20:31:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:08.043 20:31:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:08.043 20:31:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:08.043 20:31:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:08.043 20:31:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:08.043 20:31:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:08.043 20:31:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:08.043 20:31:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:08.043 20:31:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:08.043 20:31:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:08.043 20:31:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:36:08.043 20:31:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:36:08.043 20:31:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:08.043 20:31:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:08.043 20:31:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:36:08.043 20:31:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:36:08.043 20:31:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:08.043 20:31:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:08.043 20:31:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:08.043 20:31:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.043 20:31:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.043 20:31:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.043 20:31:05 -- paths/export.sh@5 -- # export PATH 00:36:08.043 20:31:05 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.043 20:31:05 -- nvmf/common.sh@46 -- # : 0 00:36:08.043 20:31:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:36:08.043 20:31:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:36:08.044 20:31:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:36:08.044 20:31:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:08.044 20:31:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:08.044 20:31:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:36:08.044 20:31:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:36:08.044 20:31:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:36:08.044 20:31:05 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:36:08.044 20:31:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:36:08.044 20:31:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:08.044 20:31:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:36:08.044 20:31:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:36:08.044 20:31:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:36:08.044 20:31:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:08.044 20:31:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:08.044 20:31:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:08.044 20:31:05 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:36:08.044 20:31:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:36:08.044 20:31:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:36:08.044 20:31:05 -- common/autotest_common.sh@10 -- # set +x 00:36:13.318 20:31:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:36:13.318 20:31:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:36:13.318 20:31:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:36:13.318 20:31:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:36:13.318 20:31:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:36:13.318 20:31:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:36:13.318 20:31:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:36:13.318 20:31:10 -- nvmf/common.sh@294 -- # net_devs=() 00:36:13.318 20:31:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:36:13.318 20:31:10 -- nvmf/common.sh@295 -- # e810=() 00:36:13.318 20:31:10 -- nvmf/common.sh@295 -- # local -ga e810 00:36:13.318 20:31:10 -- nvmf/common.sh@296 -- # x722=() 00:36:13.318 20:31:10 -- nvmf/common.sh@296 -- # local -ga x722 00:36:13.318 20:31:10 -- nvmf/common.sh@297 -- # mlx=() 00:36:13.318 20:31:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:36:13.318 20:31:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:13.318 20:31:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:13.318 20:31:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:13.318 20:31:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:13.318 20:31:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:13.318 20:31:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:13.318 20:31:10 -- 
nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:13.318 20:31:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:13.318 20:31:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:13.318 20:31:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:13.318 20:31:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:13.318 20:31:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:36:13.318 20:31:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:36:13.318 20:31:10 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:36:13.318 20:31:10 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:36:13.318 20:31:10 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:36:13.318 20:31:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:36:13.318 20:31:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:36:13.318 20:31:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:36:13.318 Found 0000:27:00.0 (0x8086 - 0x159b) 00:36:13.318 20:31:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:36:13.318 20:31:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:36:13.318 20:31:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.318 20:31:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.318 20:31:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:36:13.318 20:31:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:36:13.318 20:31:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:36:13.318 Found 0000:27:00.1 (0x8086 - 0x159b) 00:36:13.318 20:31:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:36:13.318 20:31:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:36:13.318 20:31:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.318 20:31:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.318 20:31:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:36:13.318 20:31:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:36:13.318 20:31:10 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:36:13.318 20:31:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:36:13.318 20:31:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.318 20:31:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:36:13.318 20:31:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.318 20:31:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:36:13.318 Found net devices under 0000:27:00.0: cvl_0_0 00:36:13.318 20:31:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.318 20:31:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:36:13.318 20:31:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.318 20:31:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:36:13.318 20:31:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.318 20:31:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:36:13.318 Found net devices under 0000:27:00.1: cvl_0_1 00:36:13.318 20:31:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.318 20:31:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:36:13.318 20:31:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:36:13.318 20:31:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:36:13.318 20:31:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:36:13.318 20:31:10 -- 
nvmf/common.sh@406 -- # nvmf_tcp_init 00:36:13.318 20:31:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:13.318 20:31:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:13.318 20:31:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:13.318 20:31:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:36:13.318 20:31:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:13.318 20:31:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:13.318 20:31:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:36:13.318 20:31:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:13.318 20:31:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:13.318 20:31:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:36:13.318 20:31:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:36:13.318 20:31:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:36:13.318 20:31:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:13.318 20:31:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:13.318 20:31:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:13.318 20:31:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:36:13.318 20:31:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:13.318 20:31:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:13.318 20:31:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:13.318 20:31:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:36:13.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:13.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:36:13.318 00:36:13.318 --- 10.0.0.2 ping statistics --- 00:36:13.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.318 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:36:13.318 20:31:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:13.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:13.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:36:13.318 00:36:13.318 --- 10.0.0.1 ping statistics --- 00:36:13.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.318 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:36:13.318 20:31:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:13.318 20:31:10 -- nvmf/common.sh@410 -- # return 0 00:36:13.318 20:31:10 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:36:13.318 20:31:10 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:36:15.857 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:36:15.857 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:36:15.857 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:36:15.857 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:36:15.857 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:36:15.857 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:36:15.857 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:36:15.857 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:36:15.857 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:36:15.857 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:36:15.857 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:36:15.857 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:36:15.857 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:36:15.857 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:36:15.857 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:36:15.857 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:36:16.425 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:36:16.686 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:36:16.947 20:31:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:16.947 20:31:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:36:16.947 20:31:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:36:16.947 20:31:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:16.947 20:31:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:36:16.947 20:31:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:36:16.947 20:31:14 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:36:16.947 20:31:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:36:16.947 20:31:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:36:16.947 20:31:14 -- common/autotest_common.sh@10 -- # set +x 00:36:16.947 20:31:14 -- nvmf/common.sh@469 -- # nvmfpid=1804124 00:36:16.947 20:31:14 -- nvmf/common.sh@470 -- # waitforlisten 1804124 00:36:16.947 20:31:14 -- common/autotest_common.sh@819 -- # '[' -z 1804124 ']' 00:36:16.947 20:31:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:16.947 20:31:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:36:16.947 20:31:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:16.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:16.947 20:31:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:36:16.947 20:31:14 -- common/autotest_common.sh@10 -- # set +x 00:36:16.947 20:31:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:16.947 [2024-04-25 20:31:14.813121] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:36:16.947 [2024-04-25 20:31:14.813228] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:17.207 EAL: No free 2048 kB hugepages reported on node 1 00:36:17.207 [2024-04-25 20:31:14.942476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:17.207 [2024-04-25 20:31:15.040705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:36:17.207 [2024-04-25 20:31:15.040883] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:17.207 [2024-04-25 20:31:15.040897] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:17.207 [2024-04-25 20:31:15.040906] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:17.207 [2024-04-25 20:31:15.040984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:17.207 [2024-04-25 20:31:15.041087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:17.207 [2024-04-25 20:31:15.041188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:17.207 [2024-04-25 20:31:15.041198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:17.776 20:31:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:36:17.776 20:31:15 -- common/autotest_common.sh@852 -- # return 0 00:36:17.776 20:31:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:36:17.776 20:31:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:17.776 20:31:15 -- common/autotest_common.sh@10 -- # set +x 00:36:17.776 20:31:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:17.776 20:31:15 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:17.776 20:31:15 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:36:17.776 20:31:15 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:36:17.776 20:31:15 -- scripts/common.sh@311 -- # local bdf bdfs 00:36:17.776 20:31:15 -- scripts/common.sh@312 -- # local nvmes 00:36:17.776 20:31:15 -- scripts/common.sh@314 -- # [[ -n 0000:03:00.0 0000:c9:00.0 ]] 00:36:17.776 20:31:15 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:17.776 20:31:15 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:36:17.776 20:31:15 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:03:00.0 ]] 00:36:17.776 20:31:15 -- scripts/common.sh@322 -- # uname -s 00:36:17.776 20:31:15 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:36:17.776 20:31:15 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:36:17.776 20:31:15 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:36:17.776 20:31:15 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:c9:00.0 ]] 00:36:17.776 20:31:15 -- scripts/common.sh@322 -- # uname -s 00:36:17.776 20:31:15 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:36:17.776 20:31:15 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:36:17.776 20:31:15 -- scripts/common.sh@327 -- # (( 2 )) 00:36:17.776 20:31:15 -- scripts/common.sh@328 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:36:17.776 20:31:15 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:36:17.776 20:31:15 -- target/abort_qd_sizes.sh@81 -- # 
nvme=0000:03:00.0 00:36:17.776 20:31:15 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:36:17.776 20:31:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:36:17.776 20:31:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:17.776 20:31:15 -- common/autotest_common.sh@10 -- # set +x 00:36:17.776 ************************************ 00:36:17.776 START TEST spdk_target_abort 00:36:17.776 ************************************ 00:36:17.776 20:31:15 -- common/autotest_common.sh@1104 -- # spdk_target 00:36:17.776 20:31:15 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:17.776 20:31:15 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:36:17.776 20:31:15 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:03:00.0 -b spdk_target 00:36:17.776 20:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:17.776 20:31:15 -- common/autotest_common.sh@10 -- # set +x 00:36:18.035 spdk_targetn1 00:36:18.035 20:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:18.035 20:31:15 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:18.035 20:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:18.035 20:31:15 -- common/autotest_common.sh@10 -- # set +x 00:36:18.035 [2024-04-25 20:31:15.940449] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:18.035 20:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:18.035 20:31:15 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:36:18.035 20:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:18.035 20:31:15 -- common/autotest_common.sh@10 -- # set +x 00:36:18.035 20:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:18.035 20:31:15 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:36:18.035 20:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:18.035 20:31:15 -- common/autotest_common.sh@10 -- # set +x 00:36:18.035 20:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:18.035 20:31:15 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:36:18.035 20:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:18.035 20:31:15 -- common/autotest_common.sh@10 -- # set +x 00:36:18.294 [2024-04-25 20:31:15.968661] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:18.294 20:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:18.294 20:31:15 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:36:18.294 EAL: No free 2048 kB hugepages reported on node 1 00:36:21.589 Initializing NVMe Controllers 00:36:21.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:36:21.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:36:21.589 Initialization complete. Launching workers. 00:36:21.589 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 17096, failed: 0 00:36:21.589 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1348, failed to submit 15748 00:36:21.589 success 803, unsuccess 545, failed 0 00:36:21.589 20:31:19 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:21.589 20:31:19 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:36:21.589 EAL: No free 2048 kB hugepages reported on node 1 00:36:24.882 [2024-04-25 20:31:22.496526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:36:24.882 [2024-04-25 20:31:22.496580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:36:24.882 [2024-04-25 20:31:22.496589] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:36:24.882 [2024-04-25 20:31:22.496597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:36:24.882 [2024-04-25 20:31:22.496604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:36:24.882 [2024-04-25 20:31:22.496611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:36:24.882 [2024-04-25 20:31:22.496618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be 
set 00:36:24.882 [2024-04-25 20:31:22.496625] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:36:24.882 [2024-04-25 20:31:22.496632] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:36:24.882 [2024-04-25 20:31:22.496639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:36:24.882 [2024-04-25 20:31:22.496646] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:36:24.882 [2024-04-25 20:31:22.596846] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:36:24.882 Initializing NVMe Controllers 00:36:24.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:36:24.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:36:24.882 Initialization complete. Launching workers. 00:36:24.882 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8599, failed: 0 00:36:24.882 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1238, failed to submit 7361 00:36:24.882 success 307, unsuccess 931, failed 0 00:36:24.882 20:31:22 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:24.882 20:31:22 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:36:24.882 EAL: No free 2048 kB hugepages reported on node 1 00:36:28.236 Initializing NVMe Controllers 00:36:28.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:36:28.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:36:28.236 Initialization complete. Launching workers. 
00:36:28.236 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 41083, failed: 0 00:36:28.236 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2542, failed to submit 38541 00:36:28.236 success 612, unsuccess 1930, failed 0 00:36:28.236 20:31:25 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:36:28.236 20:31:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:28.236 20:31:25 -- common/autotest_common.sh@10 -- # set +x 00:36:28.236 20:31:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:28.236 20:31:25 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:28.236 20:31:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:28.236 20:31:25 -- common/autotest_common.sh@10 -- # set +x 00:36:29.177 20:31:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:29.177 20:31:26 -- target/abort_qd_sizes.sh@62 -- # killprocess 1804124 00:36:29.177 20:31:26 -- common/autotest_common.sh@926 -- # '[' -z 1804124 ']' 00:36:29.177 20:31:26 -- common/autotest_common.sh@930 -- # kill -0 1804124 00:36:29.177 20:31:26 -- common/autotest_common.sh@931 -- # uname 00:36:29.177 20:31:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:36:29.177 20:31:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1804124 00:36:29.177 20:31:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:36:29.177 20:31:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:36:29.177 20:31:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1804124' 00:36:29.177 killing process with pid 1804124 00:36:29.177 20:31:26 -- common/autotest_common.sh@945 -- # kill 1804124 00:36:29.177 20:31:26 -- common/autotest_common.sh@950 -- # wait 1804124 00:36:29.435 00:36:29.435 real 0m11.624s 00:36:29.435 user 0m47.024s 00:36:29.435 sys 0m1.175s 00:36:29.435 20:31:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:29.435 20:31:27 -- common/autotest_common.sh@10 -- # set +x 00:36:29.435 ************************************ 00:36:29.435 END TEST spdk_target_abort 00:36:29.435 ************************************ 00:36:29.435 20:31:27 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:36:29.435 20:31:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:36:29.435 20:31:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:29.435 20:31:27 -- common/autotest_common.sh@10 -- # set +x 00:36:29.435 ************************************ 00:36:29.435 START TEST kernel_target_abort 00:36:29.435 ************************************ 00:36:29.435 20:31:27 -- common/autotest_common.sh@1104 -- # kernel_target 00:36:29.435 20:31:27 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:36:29.435 20:31:27 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:36:29.435 20:31:27 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:36:29.435 20:31:27 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:36:29.435 20:31:27 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:36:29.435 20:31:27 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:36:29.435 20:31:27 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:29.435 20:31:27 -- nvmf/common.sh@627 -- # local block nvme 00:36:29.435 20:31:27 
-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:36:29.435 20:31:27 -- nvmf/common.sh@630 -- # modprobe nvmet 00:36:29.435 20:31:27 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:29.435 20:31:27 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:36:31.972 Waiting for block devices as requested 00:36:31.972 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:36:31.972 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:32.233 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:32.233 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:32.233 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:36:32.233 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:32.491 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:36:32.491 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:32.491 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:36:32.491 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:32.491 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:36:32.750 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:36:32.750 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:36:32.750 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:32.750 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:36:33.009 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:33.009 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:36:33.009 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:36:33.948 20:31:31 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:36:33.948 20:31:31 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:33.948 20:31:31 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:36:33.948 20:31:31 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:36:33.948 20:31:31 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:33.948 No valid GPT data, bailing 00:36:33.949 20:31:31 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:33.949 20:31:31 -- scripts/common.sh@393 -- # pt= 00:36:33.949 20:31:31 -- scripts/common.sh@394 -- # return 1 00:36:33.949 20:31:31 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:36:33.949 20:31:31 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:36:33.949 20:31:31 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:36:33.949 20:31:31 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:36:33.949 20:31:31 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:36:33.949 20:31:31 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:36:33.949 No valid GPT data, bailing 00:36:33.949 20:31:31 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:36:33.949 20:31:31 -- scripts/common.sh@393 -- # pt= 00:36:33.949 20:31:31 -- scripts/common.sh@394 -- # return 1 00:36:33.949 20:31:31 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:36:33.949 20:31:31 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme1n1 ]] 00:36:33.949 20:31:31 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:36:33.949 20:31:31 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:36:33.949 20:31:31 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:33.949 20:31:31 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:36:33.949 20:31:31 -- nvmf/common.sh@654 -- # echo 1 00:36:33.949 20:31:31 -- nvmf/common.sh@655 -- # echo /dev/nvme1n1 00:36:33.949 20:31:31 -- nvmf/common.sh@656 -- # echo 1 00:36:33.949 20:31:31 -- nvmf/common.sh@662 -- # echo 
10.0.0.1 00:36:33.949 20:31:31 -- nvmf/common.sh@663 -- # echo tcp 00:36:33.949 20:31:31 -- nvmf/common.sh@664 -- # echo 4420 00:36:33.949 20:31:31 -- nvmf/common.sh@665 -- # echo ipv4 00:36:33.949 20:31:31 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:34.207 20:31:31 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.1 -t tcp -s 4420 00:36:34.207 00:36:34.207 Discovery Log Number of Records 2, Generation counter 2 00:36:34.207 =====Discovery Log Entry 0====== 00:36:34.207 trtype: tcp 00:36:34.207 adrfam: ipv4 00:36:34.207 subtype: current discovery subsystem 00:36:34.207 treq: not specified, sq flow control disable supported 00:36:34.207 portid: 1 00:36:34.207 trsvcid: 4420 00:36:34.207 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:34.207 traddr: 10.0.0.1 00:36:34.207 eflags: none 00:36:34.207 sectype: none 00:36:34.207 =====Discovery Log Entry 1====== 00:36:34.207 trtype: tcp 00:36:34.207 adrfam: ipv4 00:36:34.207 subtype: nvme subsystem 00:36:34.207 treq: not specified, sq flow control disable supported 00:36:34.207 portid: 1 00:36:34.207 trsvcid: 4420 00:36:34.207 subnqn: kernel_target 00:36:34.207 traddr: 10.0.0.1 00:36:34.207 eflags: none 00:36:34.207 sectype: none 00:36:34.207 20:31:31 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:36:34.207 20:31:31 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:34.207 20:31:31 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:34.207 20:31:31 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:34.207 20:31:31 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:34.207 20:31:31 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:36:34.207 20:31:31 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:34.207 20:31:31 -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:34.207 20:31:31 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:34.207 20:31:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:34.208 20:31:31 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:34.208 20:31:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:34.208 20:31:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:34.208 20:31:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:34.208 20:31:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:34.208 20:31:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:34.208 20:31:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:34.208 20:31:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:34.208 20:31:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:36:34.208 20:31:31 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:34.208 20:31:31 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:36:34.208 EAL: No free 2048 kB hugepages reported on node 1 00:36:37.493 
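Before the controller initialization output that follows, it is worth noting what the configure_kernel_target trace above actually built: an in-kernel nvmet subsystem named kernel_target, backed by /dev/nvme1n1, with a TCP listener on 10.0.0.1:4420. The xtrace shows the echoed values but not their redirection targets, so the configfs attribute names in the sketch below are the standard nvmet ones rather than names read from the log; the device path, subsystem name and listener address are taken from the trace.

# Minimal sketch of an in-kernel nvmet TCP target, mirroring the configfs steps above.
# Attribute file names are standard nvmet configfs entries (assumed, since the xtrace
# does not show the redirection targets); values come from the trace in this log.
modprobe nvmet_tcp                                    # pulls in nvmet as a dependency
subsys=/sys/kernel/config/nvmet/subsystems/kernel_target
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$subsys/namespaces/1" "$port"
echo 1            > "$subsys/attr_allow_any_host"     # accept any host NQN
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                   # expose the subsystem on the port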
Initializing NVMe Controllers 00:36:37.493 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:36:37.493 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:36:37.493 Initialization complete. Launching workers. 00:36:37.493 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 67404, failed: 0 00:36:37.493 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 67404, failed to submit 0 00:36:37.493 success 0, unsuccess 67404, failed 0 00:36:37.493 20:31:34 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:37.493 20:31:34 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:36:37.493 EAL: No free 2048 kB hugepages reported on node 1 00:36:40.785 Initializing NVMe Controllers 00:36:40.786 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:36:40.786 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:36:40.786 Initialization complete. Launching workers. 00:36:40.786 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 121353, failed: 0 00:36:40.786 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30726, failed to submit 90627 00:36:40.786 success 0, unsuccess 30726, failed 0 00:36:40.786 20:31:38 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:40.786 20:31:38 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:36:40.786 EAL: No free 2048 kB hugepages reported on node 1 00:36:43.321 Initializing NVMe Controllers 00:36:43.321 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:36:43.321 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:36:43.321 Initialization complete. Launching workers. 
00:36:43.321 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 117211, failed: 0 00:36:43.321 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 29310, failed to submit 87901 00:36:43.321 success 0, unsuccess 29310, failed 0 00:36:43.321 20:31:41 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:36:43.321 20:31:41 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:36:43.321 20:31:41 -- nvmf/common.sh@677 -- # echo 0 00:36:43.321 20:31:41 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:36:43.321 20:31:41 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:36:43.321 20:31:41 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:43.321 20:31:41 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:36:43.321 20:31:41 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:36:43.321 20:31:41 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:36:43.578 00:36:43.578 real 0m14.045s 00:36:43.578 user 0m6.739s 00:36:43.578 sys 0m3.570s 00:36:43.578 20:31:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:43.578 20:31:41 -- common/autotest_common.sh@10 -- # set +x 00:36:43.578 ************************************ 00:36:43.578 END TEST kernel_target_abort 00:36:43.578 ************************************ 00:36:43.578 20:31:41 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:36:43.578 20:31:41 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:36:43.578 20:31:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:36:43.578 20:31:41 -- nvmf/common.sh@116 -- # sync 00:36:43.578 20:31:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:36:43.578 20:31:41 -- nvmf/common.sh@119 -- # set +e 00:36:43.578 20:31:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:36:43.578 20:31:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:36:43.578 rmmod nvme_tcp 00:36:43.578 rmmod nvme_fabrics 00:36:43.578 rmmod nvme_keyring 00:36:43.578 20:31:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:36:43.578 20:31:41 -- nvmf/common.sh@123 -- # set -e 00:36:43.578 20:31:41 -- nvmf/common.sh@124 -- # return 0 00:36:43.578 20:31:41 -- nvmf/common.sh@477 -- # '[' -n 1804124 ']' 00:36:43.578 20:31:41 -- nvmf/common.sh@478 -- # killprocess 1804124 00:36:43.578 20:31:41 -- common/autotest_common.sh@926 -- # '[' -z 1804124 ']' 00:36:43.578 20:31:41 -- common/autotest_common.sh@930 -- # kill -0 1804124 00:36:43.578 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1804124) - No such process 00:36:43.578 20:31:41 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1804124 is not found' 00:36:43.578 Process with pid 1804124 is not found 00:36:43.578 20:31:41 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:36:43.578 20:31:41 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:36:46.110 0000:c9:00.0 (144d a80a): Already using the nvme driver 00:36:46.111 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:36:46.111 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:36:46.111 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:36:46.111 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:36:46.111 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:36:46.111 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:36:46.111 
0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:36:46.111 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:36:46.111 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:36:46.111 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:36:46.111 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:36:46.111 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:36:46.111 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:36:46.111 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:36:46.111 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:36:46.111 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:36:46.111 0000:03:00.0 (1344 51c3): Already using the nvme driver 00:36:46.369 20:31:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:36:46.369 20:31:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:36:46.369 20:31:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:46.369 20:31:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:36:46.369 20:31:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:46.369 20:31:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:46.369 20:31:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:48.909 20:31:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:36:48.909 00:36:48.909 real 0m40.838s 00:36:48.909 user 0m57.444s 00:36:48.909 sys 0m12.169s 00:36:48.909 20:31:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:48.909 20:31:46 -- common/autotest_common.sh@10 -- # set +x 00:36:48.909 ************************************ 00:36:48.909 END TEST nvmf_abort_qd_sizes 00:36:48.909 ************************************ 00:36:48.909 20:31:46 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:48.909 20:31:46 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:48.909 20:31:46 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:48.909 20:31:46 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:48.909 20:31:46 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:48.909 20:31:46 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:48.909 20:31:46 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:48.909 20:31:46 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:48.909 20:31:46 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:48.909 20:31:46 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:48.909 20:31:46 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:48.909 20:31:46 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:48.909 20:31:46 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:48.909 20:31:46 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:48.909 20:31:46 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:36:48.909 20:31:46 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:36:48.909 20:31:46 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:36:48.909 20:31:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:36:48.909 20:31:46 -- common/autotest_common.sh@10 -- # set +x 00:36:48.909 20:31:46 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:36:48.909 20:31:46 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:36:48.909 20:31:46 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:36:48.909 20:31:46 -- common/autotest_common.sh@10 -- # set +x 00:36:54.199 INFO: APP EXITING 00:36:54.199 INFO: killing all VMs 00:36:54.199 INFO: killing vhost app 00:36:54.199 INFO: EXIT DONE 00:36:56.101 0000:c9:00.0 (144d a80a): Already using the nvme driver 00:36:56.101 
0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:36:56.101 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:36:56.101 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:36:56.101 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:36:56.101 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:36:56.101 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:36:56.101 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:36:56.101 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:36:56.101 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:36:56.101 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:36:56.101 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:36:56.101 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:36:56.101 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:36:56.101 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:36:56.101 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:36:56.101 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:36:56.101 0000:03:00.0 (1344 51c3): Already using the nvme driver 00:36:58.696 Cleaning 00:36:58.696 Removing: /var/run/dpdk/spdk0/config 00:36:58.697 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:58.697 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:58.697 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:58.697 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:58.697 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:58.697 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:58.697 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:58.697 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:58.697 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:58.697 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:58.697 Removing: /var/run/dpdk/spdk1/config 00:36:58.697 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:58.697 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:58.697 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:58.697 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:58.697 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:58.697 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:58.697 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:58.697 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:58.697 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:58.697 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:58.697 Removing: /var/run/dpdk/spdk2/config 00:36:58.697 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:58.697 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:58.697 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:58.697 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:58.697 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:58.697 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:58.697 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:58.697 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:58.697 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:58.697 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:58.697 Removing: /var/run/dpdk/spdk3/config 00:36:58.697 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:58.697 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:58.697 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:58.697 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:58.697 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:58.697 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:58.697 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:58.697 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:58.697 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:58.697 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:58.697 Removing: /var/run/dpdk/spdk4/config 00:36:58.697 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:58.697 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:58.697 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:58.697 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:58.697 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:58.697 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:58.697 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:58.697 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:58.697 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:58.697 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:58.697 Removing: /dev/shm/nvmf_trace.0 00:36:58.697 Removing: /dev/shm/spdk_tgt_trace.pid1315438 00:36:58.697 Removing: /var/run/dpdk/spdk0 00:36:58.697 Removing: /var/run/dpdk/spdk1 00:36:58.697 Removing: /var/run/dpdk/spdk2 00:36:58.697 Removing: /var/run/dpdk/spdk3 00:36:58.697 Removing: /var/run/dpdk/spdk4 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1313218 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1315438 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1316241 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1317573 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1318592 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1318949 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1319418 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1319965 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1320323 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1320644 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1320957 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1321351 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1322196 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1326032 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1326374 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1326707 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1327001 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1327897 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1327951 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1328878 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1328981 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1329451 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1329523 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1329859 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1330153 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1330866 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1331180 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1331535 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1333694 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1335503 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1337348 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1339170 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1341255 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1343062 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1345160 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1346962 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1348897 00:36:58.697 Removing: /var/run/dpdk/spdk_pid1350862 
00:36:58.956 Removing: /var/run/dpdk/spdk_pid1352699 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1354766 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1356595 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1358735 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1361062 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1362861 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1364959 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1366757 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1368770 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1370664 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1372576 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1374565 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1376398 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1378458 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1380285 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1382214 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1384182 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1385989 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1387902 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1389878 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1391757 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1393778 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1396131 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1398050 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1400051 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1401917 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1403957 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1405755 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1407854 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1409651 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1411566 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1413550 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1415430 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1417495 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1419697 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1424012 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1518025 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1523140 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1534081 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1540388 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1544896 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1545501 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1550632 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1550953 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1555795 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1562419 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1565396 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1577282 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1587509 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1590231 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1591323 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1611021 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1615458 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1620400 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1622351 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1624583 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1624894 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1625197 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1625509 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1626163 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1628403 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1629534 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1630174 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1636769 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1643636 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1649652 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1688982 
00:36:58.956 Removing: /var/run/dpdk/spdk_pid1693853 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1702617 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1702621 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1707761 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1708069 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1708361 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1708804 00:36:58.956 Removing: /var/run/dpdk/spdk_pid1708967 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1709896 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1711968 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1713872 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1715857 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1717931 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1719822 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1726333 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1727063 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1728130 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1728984 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1734694 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1738475 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1744590 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1751342 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1758037 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1760122 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1761935 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1764024 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1766181 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1767071 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1767693 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1768524 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1769932 00:36:59.214 Removing: /var/run/dpdk/spdk_pid1777218 00:36:59.215 Removing: /var/run/dpdk/spdk_pid1777224 00:36:59.215 Removing: /var/run/dpdk/spdk_pid1782945 00:36:59.215 Removing: /var/run/dpdk/spdk_pid1786037 00:36:59.215 Removing: /var/run/dpdk/spdk_pid1788582 00:36:59.215 Removing: /var/run/dpdk/spdk_pid1790208 00:36:59.215 Removing: /var/run/dpdk/spdk_pid1792767 00:36:59.215 Removing: /var/run/dpdk/spdk_pid1794407 00:36:59.215 Removing: /var/run/dpdk/spdk_pid1804346 00:36:59.215 Removing: /var/run/dpdk/spdk_pid1804955 00:36:59.215 Removing: /var/run/dpdk/spdk_pid1805685 00:36:59.215 Removing: /var/run/dpdk/spdk_pid1808693 00:36:59.215 Removing: /var/run/dpdk/spdk_pid1809245 00:36:59.215 Removing: /var/run/dpdk/spdk_pid1809851 00:36:59.215 Clean 00:36:59.215 killing process with pid 1261744 00:37:07.336 killing process with pid 1261741 00:37:07.336 killing process with pid 1261743 00:37:07.336 killing process with pid 1261742 00:37:07.336 20:32:04 -- common/autotest_common.sh@1436 -- # return 0 00:37:07.336 20:32:04 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:37:07.336 20:32:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:37:07.336 20:32:04 -- common/autotest_common.sh@10 -- # set +x 00:37:07.336 20:32:04 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:37:07.336 20:32:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:37:07.336 20:32:04 -- common/autotest_common.sh@10 -- # set +x 00:37:07.336 20:32:04 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt 00:37:07.336 20:32:04 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log ]] 00:37:07.336 20:32:04 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log 00:37:07.336 20:32:04 -- spdk/autotest.sh@394 -- # hash lcov 00:37:07.336 20:32:04 -- 
spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:07.336 20:32:04 -- spdk/autotest.sh@396 -- # hostname 00:37:07.336 20:32:04 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/dsa-phy-autotest/spdk -t spdk-fcp-03 -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info 00:37:07.336 geninfo: WARNING: invalid characters removed from testname! 00:37:29.309 20:32:25 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:37:29.309 20:32:27 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:37:30.694 20:32:28 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:37:31.637 20:32:29 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:37:33.023 20:32:30 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:37:34.411 20:32:32 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:37:35.370 20:32:33 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:35.632 20:32:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:37:35.632 20:32:33 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:35.632 20:32:33 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:35.632 20:32:33 -- scripts/common.sh@442 -- $ source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:37:35.632 20:32:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.632 20:32:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.632 20:32:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.632 20:32:33 -- paths/export.sh@5 -- $ export PATH 00:37:35.632 20:32:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.632 20:32:33 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:37:35.632 20:32:33 -- common/autobuild_common.sh@435 -- $ date +%s 00:37:35.632 20:32:33 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714069953.XXXXXX 00:37:35.632 20:32:33 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714069953.Q3rkXE 00:37:35.632 20:32:33 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:37:35.632 20:32:33 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:37:35.632 20:32:33 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/' 00:37:35.632 20:32:33 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:35.632 20:32:33 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:35.632 20:32:33 -- common/autobuild_common.sh@451 -- $ get_config_params 00:37:35.632 20:32:33 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:37:35.632 20:32:33 -- common/autotest_common.sh@10 -- $ set +x 00:37:35.632 20:32:33 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:37:35.632 20:32:33 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j128 00:37:35.632 20:32:33 -- spdk/autopackage.sh@11 -- $ cd 
/var/jenkins/workspace/dsa-phy-autotest/spdk 00:37:35.632 20:32:33 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:35.632 20:32:33 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:35.632 20:32:33 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:35.632 20:32:33 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:35.632 20:32:33 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:35.632 20:32:33 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:35.632 20:32:33 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt 00:37:35.632 20:32:33 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:35.632 + [[ -n 1218838 ]] 00:37:35.632 + sudo kill 1218838 00:37:35.642 [Pipeline] } 00:37:35.658 [Pipeline] // stage 00:37:35.664 [Pipeline] } 00:37:35.680 [Pipeline] // timeout 00:37:35.685 [Pipeline] } 00:37:35.701 [Pipeline] // catchError 00:37:35.705 [Pipeline] } 00:37:35.720 [Pipeline] // wrap 00:37:35.725 [Pipeline] } 00:37:35.738 [Pipeline] // catchError 00:37:35.746 [Pipeline] stage 00:37:35.748 [Pipeline] { (Epilogue) 00:37:35.760 [Pipeline] catchError 00:37:35.761 [Pipeline] { 00:37:35.774 [Pipeline] echo 00:37:35.775 Cleanup processes 00:37:35.780 [Pipeline] sh 00:37:36.065 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:37:36.065 1824962 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:37:36.077 [Pipeline] sh 00:37:36.467 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:37:36.467 ++ grep -v 'sudo pgrep' 00:37:36.467 ++ awk '{print $1}' 00:37:36.467 + sudo kill -9 00:37:36.467 + true 00:37:36.481 [Pipeline] sh 00:37:36.771 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:46.761 [Pipeline] sh 00:37:47.048 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:47.049 Artifacts sizes are good 00:37:47.063 [Pipeline] archiveArtifacts 00:37:47.071 Archiving artifacts 00:37:47.314 [Pipeline] sh 00:37:47.602 + sudo chown -R sys_sgci /var/jenkins/workspace/dsa-phy-autotest 00:37:47.617 [Pipeline] cleanWs 00:37:47.628 [WS-CLEANUP] Deleting project workspace... 00:37:47.628 [WS-CLEANUP] Deferred wipeout is used... 00:37:47.635 [WS-CLEANUP] done 00:37:47.637 [Pipeline] } 00:37:47.656 [Pipeline] // catchError 00:37:47.668 [Pipeline] sh 00:37:47.953 + logger -p user.info -t JENKINS-CI 00:37:47.963 [Pipeline] } 00:37:47.978 [Pipeline] // stage 00:37:47.984 [Pipeline] } 00:37:48.000 [Pipeline] // node 00:37:48.006 [Pipeline] End of Pipeline 00:37:48.044 Finished: SUCCESS
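For reference, the coverage post-processing that ran just before the pipeline epilogue (the lcov calls traced above) boils down to three steps: capture counters from the built tree, merge them with the pre-test baseline, and strip third-party and example code from the combined tracefile. A condensed sketch of that sequence follows; the output paths and the baseline file cov_base.info are assumptions based on the commands shown in the log, and the rc options are abbreviated.

# Condensed sketch of the lcov steps traced above; paths and baseline are assumptions.
spdk=/var/jenkins/workspace/dsa-phy-autotest/spdk
out=$spdk/../output
rc='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
lcov $rc --no-external -q -c -d "$spdk" -t "$(hostname)" -o "$out/cov_test.info"      # capture test counters
lcov $rc -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"  # merge with baseline
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $rc -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"              # drop third-party/example code
done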